Dec 08 17:40:22 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 08 17:40:23 crc kubenswrapper[5112]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 08 17:40:23 crc kubenswrapper[5112]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Dec 08 17:40:23 crc kubenswrapper[5112]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 08 17:40:23 crc kubenswrapper[5112]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 08 17:40:23 crc kubenswrapper[5112]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 08 17:40:23 crc kubenswrapper[5112]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
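The deprecation warnings above say these flags belong in the file passed to the kubelet's --config flag. As a minimal sketch only (the field names are from the kubelet.config.k8s.io/v1beta1 KubeletConfiguration API; the values here are illustrative placeholders, not read from this node):

```yaml
# Hypothetical kubelet config-file equivalents for the deprecated flags above.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# replaces --container-runtime-endpoint
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
# replaces --volume-plugin-dir (placeholder path)
volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec
# replaces --register-with-taints (placeholder taint)
registerWithTaints:
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
# replaces --system-reserved (placeholder reservations)
systemReserved:
  cpu: 500m
  memory: 1Gi
# --minimum-container-ttl-duration has no direct config-file field; per the
# warning, eviction thresholds take over its role (placeholder threshold):
evictionHard:
  imagefs.available: 15%
```

Note that --pod-infra-container-image is the odd one out: per its warning it is not replaced by a config-file field but by sandbox-image information from the CRI runtime (e.g. CRI-O's own configuration).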
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.136185    5112 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142449    5112 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142485    5112 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142494    5112 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142503    5112 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142513    5112 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142521    5112 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142531    5112 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142540    5112 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142548    5112 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142556    5112 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142567    5112 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142577    5112 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142622    5112 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142632    5112 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142642    5112 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142651    5112 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142659    5112 feature_gate.go:328] unrecognized feature gate: OVNObservability
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142667    5112 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142675    5112 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142682    5112 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142691    5112 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142698    5112 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142706    5112 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142714    5112 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142722    5112 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142730    5112 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142739    5112 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142747    5112 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142755    5112 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142763    5112 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142771    5112 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142779    5112 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142789    5112 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142796    5112 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142804    5112 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142812    5112 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142820    5112 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142827    5112 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142835    5112 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142842    5112 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142850    5112 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142860    5112 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142869    5112 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142877    5112 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142887    5112 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142895    5112 feature_gate.go:328] unrecognized feature gate: Example2
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142903    5112 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142911    5112 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142918    5112 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142926    5112 feature_gate.go:328] unrecognized feature gate: NewOLM
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142935    5112 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142943    5112 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142951    5112 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142958    5112 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142967    5112 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.142974    5112 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.143007    5112 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.143016    5112 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.143026    5112 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.143035    5112 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.143043    5112 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.143052    5112 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.143061    5112 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.143068    5112 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.143104    5112 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.143112    5112 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.143123    5112 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.143135    5112 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.143144    5112 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.143152    5112 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.143163    5112 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.143171    5112 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.143179    5112 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.143187    5112 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.143194    5112 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.143215    5112 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.143224    5112 feature_gate.go:328] unrecognized feature gate: Example
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.143232    5112 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.143239    5112 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.143247    5112 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.143255    5112 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.143264    5112 feature_gate.go:328] unrecognized feature gate: PinnedImages
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.143272    5112 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.143279    5112 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.143287    5112 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.143295    5112 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144200    5112 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144214    5112 feature_gate.go:328] unrecognized feature gate: PinnedImages
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144222    5112 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144230    5112 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144238    5112 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144246    5112 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144253    5112 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144265    5112 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144275    5112 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144284    5112 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144292    5112 feature_gate.go:328] unrecognized feature gate: NewOLM
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144300    5112 feature_gate.go:328] unrecognized feature gate: OVNObservability
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144310    5112 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144319    5112 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144327    5112 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144336    5112 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144344    5112 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144352    5112 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144360    5112 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144369    5112 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144377    5112 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144385    5112 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144394    5112 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144401    5112 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144409    5112 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144416    5112 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144424    5112 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144434    5112 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144445    5112 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144455    5112 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144463    5112 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144470    5112 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144478    5112 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144487    5112 feature_gate.go:328] unrecognized feature gate: Example2
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144495    5112 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144502    5112 feature_gate.go:328] unrecognized feature gate: Example
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144510    5112 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144517    5112 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144525    5112 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144534    5112 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144542    5112 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144549    5112 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144557    5112 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144565    5112 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144573    5112 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144581    5112 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144589    5112 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144596    5112 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144604    5112 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144612    5112 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144619    5112 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144627    5112 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144634    5112 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144642    5112 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144651    5112 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144659    5112 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144667    5112 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144674    5112 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144682    5112 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144690    5112 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144698    5112 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144705    5112 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144713    5112 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144720    5112 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144729    5112 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144736    5112 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144744    5112 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144751    5112 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144764    5112 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144773    5112 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144780    5112 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144788    5112 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144795    5112 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144804    5112 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144811    5112 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144822    5112 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144830    5112 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144838    5112 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144845    5112 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144853    5112 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144861    5112 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144868    5112 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144876    5112 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144883    5112 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144891    5112 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.144898    5112 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145065    5112 flags.go:64] FLAG: --address="0.0.0.0"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145106    5112 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145121    5112 flags.go:64] FLAG: --anonymous-auth="true"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145132    5112 flags.go:64] FLAG: --application-metrics-count-limit="100"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145145    5112 flags.go:64] FLAG: --authentication-token-webhook="false"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145154    5112 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145166    5112 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145177    5112 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145186    5112 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145195    5112 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145205    5112 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145214    5112 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145223    5112 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145231    5112 flags.go:64] FLAG: --cgroup-root=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145243    5112 flags.go:64] FLAG: --cgroups-per-qos="true"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145253    5112 flags.go:64] FLAG: --client-ca-file=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145261    5112 flags.go:64] FLAG: --cloud-config=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145269    5112 flags.go:64] FLAG: --cloud-provider=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145278    5112 flags.go:64] FLAG: --cluster-dns="[]"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145289    5112 flags.go:64] FLAG: --cluster-domain=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145297    5112 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145306    5112 flags.go:64] FLAG: --config-dir=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145314    5112 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145327    5112 flags.go:64] FLAG: --container-log-max-files="5"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145338    5112 flags.go:64] FLAG: --container-log-max-size="10Mi"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145347    5112 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145356    5112 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145366    5112 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145375    5112 flags.go:64] FLAG: --contention-profiling="false"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145383    5112 flags.go:64] FLAG: --cpu-cfs-quota="true"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145392    5112 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145402    5112 flags.go:64] FLAG: --cpu-manager-policy="none"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145411    5112 flags.go:64] FLAG: --cpu-manager-policy-options=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145422    5112 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145432    5112 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145466    5112 flags.go:64] FLAG: --enable-debugging-handlers="true"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145477    5112 flags.go:64] FLAG: --enable-load-reader="false"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145487    5112 flags.go:64] FLAG: --enable-server="true"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145496    5112 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145507    5112 flags.go:64] FLAG: --event-burst="100"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145516    5112 flags.go:64] FLAG: --event-qps="50"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145524    5112 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145533    5112 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145542    5112 flags.go:64] FLAG: --eviction-hard=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145552    5112 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145561    5112 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145573    5112 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145582    5112 flags.go:64] FLAG: --eviction-soft=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145590    5112 flags.go:64] FLAG: --eviction-soft-grace-period=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145599    5112 flags.go:64] FLAG: --exit-on-lock-contention="false"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145607    5112 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145616    5112 flags.go:64] FLAG: --experimental-mounter-path=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145625    5112 flags.go:64] FLAG: --fail-cgroupv1="false"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145633    5112 flags.go:64] FLAG: --fail-swap-on="true"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145641    5112 flags.go:64] FLAG: --feature-gates=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145653    5112 flags.go:64] FLAG: --file-check-frequency="20s"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145662    5112 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145671    5112 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145680    5112 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145688    5112 flags.go:64] FLAG: --healthz-port="10248"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145697    5112 flags.go:64] FLAG: --help="false"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145705    5112 flags.go:64] FLAG: --hostname-override=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145714    5112 flags.go:64] FLAG: --housekeeping-interval="10s"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145723    5112 flags.go:64] FLAG: --http-check-frequency="20s"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145732    5112 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145740    5112 flags.go:64] FLAG: --image-credential-provider-config=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145748    5112 flags.go:64] FLAG: --image-gc-high-threshold="85"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145759    5112 flags.go:64] FLAG: --image-gc-low-threshold="80"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145769    5112 flags.go:64] FLAG: --image-service-endpoint=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145777    5112 flags.go:64] FLAG: --kernel-memcg-notification="false"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145800    5112 flags.go:64] FLAG: --kube-api-burst="100"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145809    5112 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145818    5112 flags.go:64] FLAG: --kube-api-qps="50"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145827    5112 flags.go:64] FLAG: --kube-reserved=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145836    5112 flags.go:64] FLAG: --kube-reserved-cgroup=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145844    5112 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145853    5112 flags.go:64] FLAG: --kubelet-cgroups=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145862    5112 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145873    5112 flags.go:64] FLAG: --lock-file=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145881    5112 flags.go:64] FLAG: --log-cadvisor-usage="false"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145890    5112 flags.go:64] FLAG: --log-flush-frequency="5s"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145899    5112 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145912    5112 flags.go:64] FLAG: --log-json-split-stream="false"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145920    5112 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145929    5112 flags.go:64] FLAG: --log-text-split-stream="false"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145937    5112 flags.go:64] FLAG: --logging-format="text"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145945    5112 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145955    5112 flags.go:64] FLAG: --make-iptables-util-chains="true"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145964    5112 flags.go:64] FLAG: --manifest-url=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145972    5112 flags.go:64] FLAG: --manifest-url-header=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145983    5112 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.145992    5112 flags.go:64] FLAG: --max-open-files="1000000"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146002    5112
flags.go:64] FLAG: --max-pods="110" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146011 5112 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146020 5112 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146029 5112 flags.go:64] FLAG: --memory-manager-policy="None" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146038 5112 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146047 5112 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146057 5112 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146066 5112 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146132 5112 flags.go:64] FLAG: --node-status-max-images="50" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146141 5112 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146150 5112 flags.go:64] FLAG: --oom-score-adj="-999" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146158 5112 flags.go:64] FLAG: --pod-cidr="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146167 5112 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146181 5112 flags.go:64] FLAG: --pod-manifest-path="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146189 5112 flags.go:64] FLAG: --pod-max-pids="-1" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146198 5112 flags.go:64] FLAG: --pods-per-core="0" Dec 08 17:40:23 
crc kubenswrapper[5112]: I1208 17:40:23.146207 5112 flags.go:64] FLAG: --port="10250" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146216 5112 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146228 5112 flags.go:64] FLAG: --provider-id="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146236 5112 flags.go:64] FLAG: --qos-reserved="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146245 5112 flags.go:64] FLAG: --read-only-port="10255" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146253 5112 flags.go:64] FLAG: --register-node="true" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146262 5112 flags.go:64] FLAG: --register-schedulable="true" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146271 5112 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146285 5112 flags.go:64] FLAG: --registry-burst="10" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146294 5112 flags.go:64] FLAG: --registry-qps="5" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146302 5112 flags.go:64] FLAG: --reserved-cpus="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146311 5112 flags.go:64] FLAG: --reserved-memory="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146321 5112 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146330 5112 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146338 5112 flags.go:64] FLAG: --rotate-certificates="false" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146348 5112 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146357 5112 flags.go:64] FLAG: --runonce="false" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146365 5112 
flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146374 5112 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146383 5112 flags.go:64] FLAG: --seccomp-default="false" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146392 5112 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146400 5112 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146409 5112 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146418 5112 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146426 5112 flags.go:64] FLAG: --storage-driver-password="root" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146436 5112 flags.go:64] FLAG: --storage-driver-secure="false" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146445 5112 flags.go:64] FLAG: --storage-driver-table="stats" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146454 5112 flags.go:64] FLAG: --storage-driver-user="root" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146462 5112 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146471 5112 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146480 5112 flags.go:64] FLAG: --system-cgroups="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146488 5112 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146502 5112 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146510 5112 flags.go:64] FLAG: --tls-cert-file="" Dec 08 
17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146522 5112 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146533 5112 flags.go:64] FLAG: --tls-min-version="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146541 5112 flags.go:64] FLAG: --tls-private-key-file="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146550 5112 flags.go:64] FLAG: --topology-manager-policy="none" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146558 5112 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146567 5112 flags.go:64] FLAG: --topology-manager-scope="container" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146575 5112 flags.go:64] FLAG: --v="2" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146586 5112 flags.go:64] FLAG: --version="false" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146597 5112 flags.go:64] FLAG: --vmodule="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146607 5112 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.146616 5112 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.146829 5112 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.146839 5112 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.146849 5112 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.146857 5112 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.146865 5112 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 08 17:40:23 crc 
kubenswrapper[5112]: W1208 17:40:23.146873 5112 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.146881 5112 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.146890 5112 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.146897 5112 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.146905 5112 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.146916 5112 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.146924 5112 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.146932 5112 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.146941 5112 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.146949 5112 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.146956 5112 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.146964 5112 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.146993 5112 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147002 5112 feature_gate.go:328] unrecognized feature gate: Example2 Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147010 5112 
feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147018 5112 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147028 5112 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147037 5112 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147045 5112 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147053 5112 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147061 5112 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147070 5112 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147111 5112 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147123 5112 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147133 5112 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147143 5112 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147153 5112 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147163 5112 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147171 5112 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 08 17:40:23 crc 
kubenswrapper[5112]: W1208 17:40:23.147179 5112 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147187 5112 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147195 5112 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147203 5112 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147211 5112 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147219 5112 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147226 5112 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147234 5112 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147246 5112 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147256 5112 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147266 5112 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147274 5112 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147285 5112 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147293 5112 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147301 5112 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147309 5112 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147316 5112 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147327 5112 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147337 5112 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147348 5112 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147357 5112 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147365 5112 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147373 5112 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147382 5112 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147389 5112 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147397 5112 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147405 5112 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147413 5112 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147421 5112 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147428 5112 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147436 5112 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147444 5112 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147452 5112 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147459 5112 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147467 5112 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147474 5112 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147482 5112 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147490 5112 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147498 5112 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147505 5112 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147516 5112 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147524 5112 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147531 5112 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147539 5112 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147547 5112 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147556 5112 feature_gate.go:328] unrecognized feature gate: Example
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147564 5112 feature_gate.go:328] unrecognized feature gate: PinnedImages
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147572 5112 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147580 5112 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147587 5112 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147595 5112 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.147604 5112 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.147628 5112 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.158246 5112 server.go:530] "Kubelet version" kubeletVersion="v1.33.5"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.158285 5112 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158347 5112 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158356 5112 feature_gate.go:328] unrecognized feature gate: Example2
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158361 5112 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158365 5112 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158370 5112 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158375 5112 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158380 5112 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158384 5112 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158388 5112 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158392 5112 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158399 5112 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158404 5112 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158408 5112 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158412 5112 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158416 5112 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158420 5112 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158424 5112 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158428 5112 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158432 5112 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158436 5112 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158441 5112 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158445 5112 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158450 5112 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158454 5112 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158458 5112 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158462 5112 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158467 5112 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158471 5112 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158476 5112 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158480 5112 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158486 5112 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158493 5112 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158498 5112 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158502 5112 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158506 5112 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158510 5112 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158514 5112 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158519 5112 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158523 5112 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158527 5112 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158531 5112 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158535 5112 feature_gate.go:328] unrecognized feature gate: OVNObservability
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158540 5112 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158544 5112 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158549 5112 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158553 5112 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158557 5112 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158561 5112 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158566 5112 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158570 5112 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158574 5112 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158578 5112 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158583 5112 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158587 5112 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158591 5112 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158595 5112 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158599 5112 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158604 5112 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158608 5112 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158612 5112 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158616 5112 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158620 5112 feature_gate.go:328] unrecognized feature gate: NewOLM
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158625 5112 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158629 5112 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158634 5112 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158639 5112 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158643 5112 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158647 5112 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158652 5112 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158656 5112 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158660 5112 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158664 5112 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158668 5112 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158672 5112 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158676 5112 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158681 5112 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158686 5112 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158692 5112 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158700 5112 feature_gate.go:328] unrecognized feature gate: Example
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158705 5112 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158710 5112 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158714 5112 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158719 5112 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158723 5112 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158727 5112 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158732 5112 feature_gate.go:328] unrecognized feature gate: PinnedImages
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.158739 5112 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false
NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158865 5112 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158875 5112 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158880 5112 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158884 5112 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158888 5112 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158893 5112 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158898 5112 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158903 5112 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158907 5112 feature_gate.go:328] unrecognized feature gate: Example Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158912 5112 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158917 5112 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158921 5112 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 08 17:40:23 crc kubenswrapper[5112]: 
W1208 17:40:23.158926 5112 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158930 5112 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158934 5112 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158938 5112 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158942 5112 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158946 5112 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158975 5112 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158980 5112 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158984 5112 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158989 5112 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158993 5112 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.158999 5112 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159006 5112 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159010 5112 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159016 5112 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159021 5112 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159025 5112 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159029 5112 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159033 5112 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159037 5112 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159042 5112 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159046 5112 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159050 5112 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159054 5112 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159058 5112 feature_gate.go:328] unrecognized feature gate: OVNObservability
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159063 5112 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159067 5112 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159071 5112 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159092 5112 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159096 5112 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159101 5112 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159107 5112 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159111 5112 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159115 5112 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159120 5112 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159124 5112 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159128 5112 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159132 5112 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159136 5112 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159142 5112 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159147 5112 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159152 5112 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159157 5112 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159161 5112 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159165 5112 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159169 5112 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159175 5112 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159179 5112 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159183 5112 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159187 5112 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159191 5112 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159195 5112 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159200 5112 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159204 5112 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159208 5112 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159212 5112 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159216 5112 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159221 5112 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159225 5112 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159229 5112 feature_gate.go:328] unrecognized feature gate: Example2
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159233 5112 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159237 5112 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159241 5112 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159245 5112 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159251 5112 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159255 5112 feature_gate.go:328] unrecognized feature gate: PinnedImages
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159260 5112 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159264 5112 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159269 5112 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159273 5112 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159277 5112 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159281 5112 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159286 5112 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.159290 5112 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.159299 5112 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.159701 5112 server.go:962] "Client rotation is on, will bootstrap in background"
Dec 08 17:40:23 crc kubenswrapper[5112]: E1208 17:40:23.162432 5112 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.168029 5112 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.168240 5112 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.169369 5112 server.go:1019] "Starting client certificate rotation"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.169523 5112 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.169599 5112 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.178991 5112 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 08 17:40:23 crc kubenswrapper[5112]: E1208 17:40:23.180609 5112 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.182414 5112 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.190473 5112 log.go:25] "Validated CRI v1 runtime API"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.210694 5112 log.go:25] "Validated CRI v1 image API"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.212536 5112 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.215434 5112 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2025-12-08-17-34-18-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2]
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.215465 5112 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:44 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}]
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.231945 5112 manager.go:217] Machine: {Timestamp:2025-12-08 17:40:23.230352992 +0000 UTC m=+0.239901703 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33649926144 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:b5fe6617-167d-4502-9bb8-e694c6fec87c BootID:1bfc9941-22f6-447c-a313-68da2bceb39a Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729986048 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824963072 Type:vfs Inodes:1048576 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:44 Capacity:1073741824 Type:vfs Inodes:4107657 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824963072 Type:vfs Inodes:4107657 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:95:6b:fa Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:95:6b:fa Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:79:ec:a0 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:b7:84:ae Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:ea:1e:d2 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:61:d2:bb Speed:-1 Mtu:1496} {Name:eth10 MacAddress:8e:03:9c:2d:cd:b6 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:e2:9d:11:e5:f0:14 Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649926144 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.232234 5112 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.232518 5112 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.233881 5112 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.233933 5112 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.234213 5112 topology_manager.go:138] "Creating topology manager with none policy"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.234226 5112 container_manager_linux.go:306] "Creating device plugin manager"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.234254 5112 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.234485 5112 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.234691 5112 state_mem.go:36] "Initialized new in-memory state store"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.234855 5112 server.go:1267] "Using root directory" path="/var/lib/kubelet"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.235462 5112 kubelet.go:491] "Attempting to sync node with API server"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.235480 5112 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.235494 5112 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.235505 5112 kubelet.go:397] "Adding apiserver pod source"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.235520 5112 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.237153 5112 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.237171 5112 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Dec 08 17:40:23 crc kubenswrapper[5112]: E1208 17:40:23.240143 5112 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 08 17:40:23 crc kubenswrapper[5112]: E1208 17:40:23.240692 5112 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.242279 5112 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.242322 5112 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.244239 5112 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.244448 5112 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.245018 5112 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.245463 5112 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.245484 5112 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.245490 5112 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.245497 5112 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.245504 5112 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.245510 5112 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.245516 5112 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.245523 5112 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.245530 5112 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.245541 5112 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.245551 5112 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.245717 5112 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.245963 5112 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.245983 5112 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.246670 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.257491 5112 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.257577 5112 server.go:1295] "Started kubelet"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.257709 5112 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.257758 5112 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.257844 5112 server_v1.go:47] "podresources" method="list" useActivePods=true
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.258557 5112 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 08 17:40:23 crc systemd[1]: Started Kubernetes Kubelet.
Dec 08 17:40:23 crc kubenswrapper[5112]: E1208 17:40:23.258905 5112 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.246:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187f4e41c03c1221 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:23.257526817 +0000 UTC m=+0.267075518,LastTimestamp:2025-12-08 17:40:23.257526817 +0000 UTC m=+0.267075518,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.259988 5112 server.go:317] "Adding debug handlers to kubelet server" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.260406 5112 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.260876 5112 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.261104 5112 volume_manager.go:295] "The desired_state_of_world populator starts" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.261128 5112 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.261266 5112 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 08 17:40:23 crc kubenswrapper[5112]: E1208 17:40:23.261259 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:40:23 crc kubenswrapper[5112]: E1208 17:40:23.262064 5112 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="200ms" Dec 08 17:40:23 crc kubenswrapper[5112]: E1208 17:40:23.262120 5112 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.267738 5112 factory.go:55] Registering systemd factory Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.267785 5112 factory.go:223] Registration of the systemd container factory successfully Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.268197 5112 factory.go:153] Registering CRI-O factory Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.268221 5112 factory.go:223] Registration of the crio container factory successfully Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.268294 5112 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.268320 5112 factory.go:103] Registering Raw factory Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.268337 5112 manager.go:1196] Started watching for new ooms in manager Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.269059 5112 manager.go:319] Starting recovery of all containers Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290335 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" 
volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290385 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290395 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290403 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290411 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290420 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290428 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" 
volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290436 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290445 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290454 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290478 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290487 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290497 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" 
volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290508 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290521 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290538 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290548 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290559 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290568 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" 
seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290577 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290587 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290596 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290606 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290615 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290626 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290636 
5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290667 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290677 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290702 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290714 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290743 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290754 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290769 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290779 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290789 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290800 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290811 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290821 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" 
volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290832 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290843 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290853 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290865 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290876 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290886 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" 
seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290896 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290906 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290916 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290927 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290937 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290949 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext="" Dec 08 17:40:23 crc 
kubenswrapper[5112]: I1208 17:40:23.290978 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.290989 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291000 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291011 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291022 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291032 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291048 5112 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291058 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291070 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291166 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291178 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291190 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291202 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" 
volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291213 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291223 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291231 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291241 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291251 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291260 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" 
volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291269 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291279 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291289 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291298 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291316 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291328 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" 
volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291337 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291347 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291357 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291368 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291379 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291390 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291402 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291413 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291423 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291435 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291445 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291456 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291467 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291478 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291489 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291499 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291509 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291520 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291530 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291539 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291569 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291579 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291589 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291638 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291662 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291672 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291682 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291693 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291702 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291712 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291721 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291730 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291740 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291750 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291760 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291770 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291781 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291800 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291808 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291815 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291824 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291833 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291842 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291851 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291860 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291869 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291879 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291890 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291902 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291913 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291925 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291935 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291946 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291958 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291969 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291980 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.291991 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.292001 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.292012 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.292022 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.292032 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.292911 5112 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.292941 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.292954 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.292965 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.292976 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293008 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293020 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293037 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293049 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293060 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293119 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293134 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293146 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293159 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293179 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293191 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293202 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293219 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293227 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293235 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293244 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293252 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293262 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293271 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293280 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293287 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293296 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293304 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293312 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293322 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293341 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293357 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293366 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293374 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293381 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293390 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293397 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293406 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293415 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293438 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293446 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293454 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293462 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293470 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293479 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293489 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293497 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293506 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293515 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293523 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293532 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293540 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293548 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293556 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293563 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293573 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293586 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293598 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293606 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293614 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293622 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293630 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293638 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293646 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293655 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293663 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293671 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293683 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext=""
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293696 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0"
volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293706 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293715 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293726 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293735 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293743 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293752 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" 
seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293760 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293767 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293783 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293806 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293844 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293853 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext="" Dec 08 17:40:23 crc 
kubenswrapper[5112]: I1208 17:40:23.293862 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293869 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293877 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293885 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293893 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293902 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293911 5112 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293920 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293970 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.293994 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.294003 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.294012 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.294021 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.294030 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.294037 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.294045 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.294053 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.294061 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.294070 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" 
volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.294131 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.294145 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.294153 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.294161 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.294169 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.294177 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" 
volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.294185 5112 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext="" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.294193 5112 reconstruct.go:97] "Volume reconstruction finished" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.294199 5112 reconciler.go:26] "Reconciler: start to sync state" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.294208 5112 manager.go:324] Recovery completed Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.308987 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.310790 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.311028 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.311045 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.312516 5112 cpu_manager.go:222] "Starting CPU manager" policy="none" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.312537 5112 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.312568 5112 state_mem.go:36] "Initialized new in-memory state store" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.312716 5112 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.315358 5112 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.315399 5112 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.315424 5112 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.315433 5112 kubelet.go:2451] "Starting kubelet main sync loop" Dec 08 17:40:23 crc kubenswrapper[5112]: E1208 17:40:23.315674 5112 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 08 17:40:23 crc kubenswrapper[5112]: E1208 17:40:23.316633 5112 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.321438 5112 policy_none.go:49] "None policy: Start" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.321466 5112 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.321490 5112 state_mem.go:35] "Initializing new in-memory state store" Dec 08 17:40:23 crc kubenswrapper[5112]: E1208 17:40:23.362051 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.371798 5112 manager.go:341] "Starting Device Plugin manager" Dec 08 17:40:23 crc kubenswrapper[5112]: E1208 17:40:23.371868 5112 manager.go:517] "Failed to read 
data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.371893 5112 server.go:85] "Starting device plugin registration server" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.372360 5112 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.372381 5112 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.372721 5112 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.372817 5112 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.372830 5112 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 08 17:40:23 crc kubenswrapper[5112]: E1208 17:40:23.377812 5112 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="non-existent label \"crio-containers\"" Dec 08 17:40:23 crc kubenswrapper[5112]: E1208 17:40:23.377884 5112 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.416722 5112 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.416953 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.417825 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.417871 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.417884 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.418488 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.418708 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.418752 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.419170 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.419199 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.419211 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.419301 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.419338 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.419351 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.419767 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.419848 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.419880 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.420166 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.420191 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.420199 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.420245 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.420272 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.420282 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.420815 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.420894 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.420923 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.421186 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.421210 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.421219 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.421284 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.421302 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.421315 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.421743 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.421983 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.422016 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.422154 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.422189 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.422203 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.422431 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.422454 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.422462 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.422893 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.422924 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.423322 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.423352 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.423365 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:23 crc kubenswrapper[5112]: E1208 17:40:23.447387 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: E1208 17:40:23.453430 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: E1208 17:40:23.462740 5112 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="400ms"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.472865 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.473577 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.473617 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.473630 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.473655 5112 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: E1208 17:40:23.474101 5112 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.246:6443: connect: connection refused" node="crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: E1208 17:40:23.477892 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: E1208 17:40:23.494801 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.495650 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.495831 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.495855 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.495871 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.495887 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.495901 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.495914 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.495928 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.495962 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.496003 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.496038 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.496060 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.496099 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.496119 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.496140 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.496160 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.496180 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.496198 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.496216 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.496236 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.496263 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.496284 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.496303 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.496396 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.496476 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.496485 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.496584 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.496683 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.496779 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.496961 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: E1208 17:40:23.501429 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.597887 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.597960 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.597992 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.598029 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.598063 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.598071 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.598134 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.598170 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.598200 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.598121 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.598148 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.598256 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.598277 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.598225 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.598325 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.598374 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.598402 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.598428 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.598457 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.598459 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.598480 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.598519 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.598522 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.598518 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.598544 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.598552 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.598566 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.598584 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.598615 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.598643 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.598683 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.598730 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.674468 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.675736 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.675779 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.675789 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.675811 5112 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: E1208 17:40:23.676250 5112 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.246:6443: connect: connection refused" node="crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.748587 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.754719 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.773605 5112 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-ab900d148c4ea8d41b6b24eb5242679167d561813040b7175c0d1b23f4298c06 WatchSource:0}: Error finding container ab900d148c4ea8d41b6b24eb5242679167d561813040b7175c0d1b23f4298c06: Status 404 returned error can't find the container with id ab900d148c4ea8d41b6b24eb5242679167d561813040b7175c0d1b23f4298c06
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.775137 5112 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b638b8f4bb0070e40528db779baf6a2.slice/crio-f1b3f3e56ee28aeec32ec679c4c14d5dc0f4f78eca1a02e0732ebe32b678e930 WatchSource:0}: Error finding container f1b3f3e56ee28aeec32ec679c4c14d5dc0f4f78eca1a02e0732ebe32b678e930: Status 404 returned error can't find the container with id f1b3f3e56ee28aeec32ec679c4c14d5dc0f4f78eca1a02e0732ebe32b678e930
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.776930 5112 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.778599 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.793670 5112 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c5b4bed930554494851fe3cb2b2a.slice/crio-03a01fb3c5c6f2d52a57b7979f5a6e27f10798341278ba7870b1415e1ac7324b WatchSource:0}: Error finding container 03a01fb3c5c6f2d52a57b7979f5a6e27f10798341278ba7870b1415e1ac7324b: Status 404 returned error can't find the container with id 03a01fb3c5c6f2d52a57b7979f5a6e27f10798341278ba7870b1415e1ac7324b
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.795105 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: I1208 17:40:23.801997 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:40:23 crc kubenswrapper[5112]: W1208 17:40:23.807057 5112 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-36301ee311aa4176f38a0e955e5fdbd4eda3534f9bc80d15027a320d51374c3c WatchSource:0}: Error finding container 36301ee311aa4176f38a0e955e5fdbd4eda3534f9bc80d15027a320d51374c3c: Status 404 returned error can't find the container with id 36301ee311aa4176f38a0e955e5fdbd4eda3534f9bc80d15027a320d51374c3c
Dec 08 17:40:23 crc kubenswrapper[5112]: E1208 17:40:23.863605 5112 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="800ms"
Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.081618 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.085335 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.085426 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.085442 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.085470 5112 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 08 17:40:24 crc kubenswrapper[5112]: E1208 17:40:24.085969 5112 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.246:6443: connect: connection refused" node="crc"
Dec 08 17:40:24 crc kubenswrapper[5112]: E1208 17:40:24.230135 5112 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.247872 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused
Dec 08 17:40:24 crc kubenswrapper[5112]: E1208 17:40:24.262580 5112 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.323169 5112 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="3fbd9e0043ef37c10fbc9823215d21c9b1eee4025f73932cd6cbf3055d5f2397" exitCode=0
Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.323299 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"3fbd9e0043ef37c10fbc9823215d21c9b1eee4025f73932cd6cbf3055d5f2397"}
Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.323382 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"36301ee311aa4176f38a0e955e5fdbd4eda3534f9bc80d15027a320d51374c3c"}
Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.323525 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.324307 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.324353 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.324367 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:24 crc kubenswrapper[5112]: E1208 17:40:24.324586 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.325808 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.326125 5112 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="60a4953961d430db10a6fe21df995549ba6cbe84be5c4e7bfc3788c53c152bd6" exitCode=0
Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.326198 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"60a4953961d430db10a6fe21df995549ba6cbe84be5c4e7bfc3788c53c152bd6"}
Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.326260 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"03a01fb3c5c6f2d52a57b7979f5a6e27f10798341278ba7870b1415e1ac7324b"}
Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.326442 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.326516 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.326541 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.326551 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.328846 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.328873 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.328883 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:24 crc kubenswrapper[5112]: E1208 17:40:24.329124 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.329168 5112 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="86e97499087a3161ebf425803d6ac7e66513b7c73c759c730b8a17858f4e1b82" exitCode=0
Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.329238 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"86e97499087a3161ebf425803d6ac7e66513b7c73c759c730b8a17858f4e1b82"}
Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.329254 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"ab900d148c4ea8d41b6b24eb5242679167d561813040b7175c0d1b23f4298c06"}
Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.329330 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.329836 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.329857 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.329867 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:24 crc kubenswrapper[5112]: E1208 17:40:24.330012 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.331706 5112 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="0018c38fa9031e6a1ae5e3b128294b6e93ace41c3e09982a94e2af830b208186" exitCode=0
Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.331765 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"0018c38fa9031e6a1ae5e3b128294b6e93ace41c3e09982a94e2af830b208186"}
Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.331788 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"f1b3f3e56ee28aeec32ec679c4c14d5dc0f4f78eca1a02e0732ebe32b678e930"}
Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.331858 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.333043 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.333074 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.333102 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:24 crc kubenswrapper[5112]: E1208 17:40:24.333589 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.337194 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod"
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"1f8f801ff9a8f57c7935a0dc59f8db7ead28e39b9b0235c5fbac0238e1ae4d1c"} Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.337226 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"9dc481378b373e86ea29518adbe09eb3e52a2f18af97c0fbc31b5a556d6df5b3"} Dec 08 17:40:24 crc kubenswrapper[5112]: E1208 17:40:24.595553 5112 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 08 17:40:24 crc kubenswrapper[5112]: E1208 17:40:24.666652 5112 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="1.6s" Dec 08 17:40:24 crc kubenswrapper[5112]: E1208 17:40:24.833230 5112 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.887073 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.890272 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.890316 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.890329 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:24 crc kubenswrapper[5112]: I1208 17:40:24.890356 5112 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 17:40:25 crc kubenswrapper[5112]: I1208 17:40:25.304363 5112 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 08 17:40:25 crc kubenswrapper[5112]: I1208 17:40:25.343394 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"407e6dc04957ad635291d63043e12fc7751c6de36462219e6f8e991af59b523c"} Dec 08 17:40:25 crc kubenswrapper[5112]: I1208 17:40:25.343447 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"9ea6605166b2660aac60c892c3aa4300f70f3c325fa54b0c5cebab4c59e7e44d"} Dec 08 17:40:25 crc kubenswrapper[5112]: I1208 17:40:25.347351 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"62545f7eda9644c677205810fa88d197c6edaf171a4f7972db714c29d0e0c0f0"} Dec 08 17:40:25 crc kubenswrapper[5112]: I1208 17:40:25.347392 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"635a0eb4e03cd1986945a33c0535e16c4f6d5cd1fbcbdb9e1ecf08f5ef7f71a1"} Dec 08 17:40:25 
crc kubenswrapper[5112]: I1208 17:40:25.349067 5112 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="276a86f68c5958c297d0c493b713318ea5ebe302666d6478a229eae2ffed90b1" exitCode=0 Dec 08 17:40:25 crc kubenswrapper[5112]: I1208 17:40:25.349149 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"276a86f68c5958c297d0c493b713318ea5ebe302666d6478a229eae2ffed90b1"} Dec 08 17:40:25 crc kubenswrapper[5112]: I1208 17:40:25.349377 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:25 crc kubenswrapper[5112]: I1208 17:40:25.350598 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:25 crc kubenswrapper[5112]: I1208 17:40:25.350630 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:25 crc kubenswrapper[5112]: I1208 17:40:25.350640 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:25 crc kubenswrapper[5112]: I1208 17:40:25.350786 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"63bd2b5515bea7e14b54005f1477f959aac15ff6b2771db37fc28e46eea6be70"} Dec 08 17:40:25 crc kubenswrapper[5112]: E1208 17:40:25.350836 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:40:25 crc kubenswrapper[5112]: I1208 17:40:25.350931 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:25 crc kubenswrapper[5112]: I1208 17:40:25.352248 5112 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:25 crc kubenswrapper[5112]: I1208 17:40:25.352272 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:25 crc kubenswrapper[5112]: I1208 17:40:25.352284 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:25 crc kubenswrapper[5112]: E1208 17:40:25.352566 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:40:25 crc kubenswrapper[5112]: I1208 17:40:25.355106 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"5e0e157a3ba41263bd7a39a6c64f50ccf232bc55ef3df90ffbbd314418ce69bb"} Dec 08 17:40:25 crc kubenswrapper[5112]: I1208 17:40:25.355148 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"69cc882495a4c55c83d8793d16e873cde0e5c81bbf76ed52eec3ed59b99b937f"} Dec 08 17:40:25 crc kubenswrapper[5112]: I1208 17:40:25.355166 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"8e1ad1521591e581cd357d3b49dde54e9a2c1a793edc8dced64f3acbe9f7f2ac"} Dec 08 17:40:25 crc kubenswrapper[5112]: I1208 17:40:25.355383 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:25 crc kubenswrapper[5112]: I1208 17:40:25.356302 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:25 crc 
kubenswrapper[5112]: I1208 17:40:25.356348 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:25 crc kubenswrapper[5112]: I1208 17:40:25.356371 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:25 crc kubenswrapper[5112]: E1208 17:40:25.356705 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:40:25 crc kubenswrapper[5112]: I1208 17:40:25.683704 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 17:40:26 crc kubenswrapper[5112]: I1208 17:40:26.384194 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"f47b69e17f8b8b7e2c46f449515d3eb8408a6ef649bf396eef3abeac2d4b2483"} Dec 08 17:40:26 crc kubenswrapper[5112]: I1208 17:40:26.384357 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:26 crc kubenswrapper[5112]: I1208 17:40:26.385658 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:26 crc kubenswrapper[5112]: I1208 17:40:26.385739 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:26 crc kubenswrapper[5112]: I1208 17:40:26.385771 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:26 crc kubenswrapper[5112]: E1208 17:40:26.386205 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:40:26 crc kubenswrapper[5112]: I1208 
17:40:26.389683 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"464a4e10d8ff56b45cb38a25371b700b53ade63b40535c26b880f39ce81f1a0c"} Dec 08 17:40:26 crc kubenswrapper[5112]: I1208 17:40:26.389753 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"43d313abdeed2aafe14cb80ae38e97b5bbdc62197de02664fbc6c17597822faa"} Dec 08 17:40:26 crc kubenswrapper[5112]: I1208 17:40:26.389791 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"26119bf950efdaf6bdf29fdd288dcde55f46e233074dadd05d09ae9662a98ac4"} Dec 08 17:40:26 crc kubenswrapper[5112]: I1208 17:40:26.389951 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:26 crc kubenswrapper[5112]: I1208 17:40:26.390973 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:26 crc kubenswrapper[5112]: I1208 17:40:26.391034 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:26 crc kubenswrapper[5112]: I1208 17:40:26.391059 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:26 crc kubenswrapper[5112]: E1208 17:40:26.391458 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:40:26 crc kubenswrapper[5112]: I1208 17:40:26.392212 5112 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" 
containerID="8759198baf7f22676f14d8513f3af488df4c58c4ed65db59aac2bff888a6d878" exitCode=0 Dec 08 17:40:26 crc kubenswrapper[5112]: I1208 17:40:26.392291 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"8759198baf7f22676f14d8513f3af488df4c58c4ed65db59aac2bff888a6d878"} Dec 08 17:40:26 crc kubenswrapper[5112]: I1208 17:40:26.392385 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:26 crc kubenswrapper[5112]: I1208 17:40:26.392577 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:26 crc kubenswrapper[5112]: I1208 17:40:26.392581 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:26 crc kubenswrapper[5112]: I1208 17:40:26.393168 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:26 crc kubenswrapper[5112]: I1208 17:40:26.393221 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:26 crc kubenswrapper[5112]: I1208 17:40:26.393248 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:26 crc kubenswrapper[5112]: I1208 17:40:26.393325 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:26 crc kubenswrapper[5112]: I1208 17:40:26.393389 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:26 crc kubenswrapper[5112]: I1208 17:40:26.393415 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:26 crc kubenswrapper[5112]: I1208 17:40:26.393537 5112 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:26 crc kubenswrapper[5112]: I1208 17:40:26.393590 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:26 crc kubenswrapper[5112]: I1208 17:40:26.393616 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:26 crc kubenswrapper[5112]: E1208 17:40:26.393727 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:40:26 crc kubenswrapper[5112]: E1208 17:40:26.393764 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:40:26 crc kubenswrapper[5112]: E1208 17:40:26.395134 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:40:27 crc kubenswrapper[5112]: I1208 17:40:27.327167 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:40:27 crc kubenswrapper[5112]: I1208 17:40:27.399700 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"7ace3eb0fbb6c37ad43df89af7c25f6a0bda9c7e079a6bfb7683984630e7cd3a"} Dec 08 17:40:27 crc kubenswrapper[5112]: I1208 17:40:27.399749 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"a7b9e7098ad13452cf8f0aa13c84480bf630b57c0296cec645e8fd4f030b13fb"} Dec 08 17:40:27 crc kubenswrapper[5112]: I1208 17:40:27.399764 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"953eaf00aeddf0f031eb9db85dda27332777dd31ac6746dfdedcc13ed20cb02c"} Dec 08 17:40:27 crc kubenswrapper[5112]: I1208 17:40:27.399774 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"16bdf27bbd7b756aec823f0df94a6a72c5ad978e71a5e24824de2ab45e54c0c0"} Dec 08 17:40:27 crc kubenswrapper[5112]: I1208 17:40:27.399791 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:27 crc kubenswrapper[5112]: I1208 17:40:27.400010 5112 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 08 17:40:27 crc kubenswrapper[5112]: I1208 17:40:27.400117 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:27 crc kubenswrapper[5112]: I1208 17:40:27.400523 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:27 crc kubenswrapper[5112]: I1208 17:40:27.400559 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:27 crc kubenswrapper[5112]: I1208 17:40:27.400571 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:27 crc kubenswrapper[5112]: E1208 17:40:27.400942 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:40:27 crc kubenswrapper[5112]: I1208 17:40:27.400975 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:27 crc kubenswrapper[5112]: I1208 17:40:27.401015 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:27 
crc kubenswrapper[5112]: I1208 17:40:27.401030 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:27 crc kubenswrapper[5112]: E1208 17:40:27.401456 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:40:27 crc kubenswrapper[5112]: I1208 17:40:27.484986 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:40:28 crc kubenswrapper[5112]: I1208 17:40:28.227329 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:40:28 crc kubenswrapper[5112]: I1208 17:40:28.409016 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"c9534bda3d71b68f6920f0c8a5dd54d3d31bac188d8fb76a1d29a3f5f0b621a8"} Dec 08 17:40:28 crc kubenswrapper[5112]: I1208 17:40:28.409230 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:28 crc kubenswrapper[5112]: I1208 17:40:28.409311 5112 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 08 17:40:28 crc kubenswrapper[5112]: I1208 17:40:28.409401 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:28 crc kubenswrapper[5112]: I1208 17:40:28.409804 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:28 crc kubenswrapper[5112]: I1208 17:40:28.410149 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:28 crc kubenswrapper[5112]: I1208 17:40:28.410202 5112 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:28 crc kubenswrapper[5112]: I1208 17:40:28.410229 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:28 crc kubenswrapper[5112]: I1208 17:40:28.410586 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:28 crc kubenswrapper[5112]: I1208 17:40:28.410626 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:28 crc kubenswrapper[5112]: I1208 17:40:28.410649 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:28 crc kubenswrapper[5112]: I1208 17:40:28.410641 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:28 crc kubenswrapper[5112]: E1208 17:40:28.410807 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:40:28 crc kubenswrapper[5112]: I1208 17:40:28.410826 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:28 crc kubenswrapper[5112]: I1208 17:40:28.410853 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:28 crc kubenswrapper[5112]: E1208 17:40:28.411002 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:40:28 crc kubenswrapper[5112]: E1208 17:40:28.411838 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:40:29 crc kubenswrapper[5112]: I1208 17:40:29.135833 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not 
ready" pod="openshift-etcd/etcd-crc" Dec 08 17:40:29 crc kubenswrapper[5112]: I1208 17:40:29.411812 5112 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 08 17:40:29 crc kubenswrapper[5112]: I1208 17:40:29.411903 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:29 crc kubenswrapper[5112]: I1208 17:40:29.411921 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:29 crc kubenswrapper[5112]: I1208 17:40:29.413005 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:29 crc kubenswrapper[5112]: I1208 17:40:29.413061 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:29 crc kubenswrapper[5112]: I1208 17:40:29.413121 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:29 crc kubenswrapper[5112]: I1208 17:40:29.413195 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:29 crc kubenswrapper[5112]: I1208 17:40:29.413235 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:29 crc kubenswrapper[5112]: I1208 17:40:29.413255 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:29 crc kubenswrapper[5112]: E1208 17:40:29.413780 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:40:29 crc kubenswrapper[5112]: E1208 17:40:29.414335 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:40:29 crc kubenswrapper[5112]: I1208 
17:40:29.844549 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:40:29 crc kubenswrapper[5112]: I1208 17:40:29.844886 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:29 crc kubenswrapper[5112]: I1208 17:40:29.846225 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:29 crc kubenswrapper[5112]: I1208 17:40:29.846280 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:29 crc kubenswrapper[5112]: I1208 17:40:29.846293 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:29 crc kubenswrapper[5112]: E1208 17:40:29.846711 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:40:30 crc kubenswrapper[5112]: I1208 17:40:30.351286 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:40:30 crc kubenswrapper[5112]: I1208 17:40:30.415647 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:30 crc kubenswrapper[5112]: I1208 17:40:30.415672 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:30 crc kubenswrapper[5112]: I1208 17:40:30.416815 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:30 crc kubenswrapper[5112]: I1208 17:40:30.416878 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:30 crc kubenswrapper[5112]: I1208 17:40:30.416903 5112 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:30 crc kubenswrapper[5112]: I1208 17:40:30.416911 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:30 crc kubenswrapper[5112]: I1208 17:40:30.416950 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:30 crc kubenswrapper[5112]: I1208 17:40:30.416978 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:30 crc kubenswrapper[5112]: E1208 17:40:30.417613 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:40:30 crc kubenswrapper[5112]: E1208 17:40:30.417941 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:40:31 crc kubenswrapper[5112]: I1208 17:40:31.228133 5112 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 08 17:40:31 crc kubenswrapper[5112]: I1208 17:40:31.228293 5112 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 08 17:40:33 crc kubenswrapper[5112]: I1208 17:40:33.008575 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:40:33 crc kubenswrapper[5112]: I1208 17:40:33.008918 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:33 crc kubenswrapper[5112]: I1208 17:40:33.010583 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:33 crc kubenswrapper[5112]: I1208 17:40:33.010673 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:33 crc kubenswrapper[5112]: I1208 17:40:33.010702 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:33 crc kubenswrapper[5112]: E1208 17:40:33.011348 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:40:33 crc kubenswrapper[5112]: E1208 17:40:33.378070 5112 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 08 17:40:34 crc kubenswrapper[5112]: E1208 17:40:34.892318 5112 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc"
Dec 08 17:40:35 crc kubenswrapper[5112]: I1208 17:40:35.247849 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Dec 08 17:40:35 crc kubenswrapper[5112]: E1208 17:40:35.306573 5112 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Dec 08 17:40:35 crc kubenswrapper[5112]: I1208 17:40:35.788596 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:40:35 crc kubenswrapper[5112]: I1208 17:40:35.788831 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:35 crc kubenswrapper[5112]: I1208 17:40:35.789614 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:35 crc kubenswrapper[5112]: I1208 17:40:35.789648 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:35 crc kubenswrapper[5112]: I1208 17:40:35.789659 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:35 crc kubenswrapper[5112]: E1208 17:40:35.789957 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:40:35 crc kubenswrapper[5112]: I1208 17:40:35.794174 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:40:36 crc kubenswrapper[5112]: I1208 17:40:36.147708 5112 trace.go:236] Trace[484148811]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Dec-2025 17:40:26.145) (total time: 10002ms):
Dec 08 17:40:36 crc kubenswrapper[5112]: Trace[484148811]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (17:40:36.147)
Dec 08 17:40:36 crc kubenswrapper[5112]: Trace[484148811]: [10.002093967s] [10.002093967s] END
Dec 08 17:40:36 crc kubenswrapper[5112]: E1208 17:40:36.147772 5112 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 08 17:40:36 crc kubenswrapper[5112]: I1208 17:40:36.259013 5112 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Dec 08 17:40:36 crc kubenswrapper[5112]: I1208 17:40:36.259127 5112 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Dec 08 17:40:36 crc kubenswrapper[5112]: E1208 17:40:36.268131 5112 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s"
Dec 08 17:40:36 crc kubenswrapper[5112]: I1208 17:40:36.271018 5112 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Dec 08 17:40:36 crc kubenswrapper[5112]: I1208 17:40:36.271161 5112 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Dec 08 17:40:36 crc kubenswrapper[5112]: I1208 17:40:36.433157 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:36 crc kubenswrapper[5112]: I1208 17:40:36.433964 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:36 crc kubenswrapper[5112]: I1208 17:40:36.434005 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:36 crc kubenswrapper[5112]: I1208 17:40:36.434015 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:36 crc kubenswrapper[5112]: E1208 17:40:36.434436 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:40:36 crc kubenswrapper[5112]: I1208 17:40:36.437708 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:40:36 crc kubenswrapper[5112]: I1208 17:40:36.493059 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:36 crc kubenswrapper[5112]: I1208 17:40:36.494008 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:36 crc kubenswrapper[5112]: I1208 17:40:36.494091 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:36 crc kubenswrapper[5112]: I1208 17:40:36.494106 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:36 crc kubenswrapper[5112]: I1208 17:40:36.494139 5112 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 08 17:40:36 crc kubenswrapper[5112]: I1208 17:40:36.533728 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Dec 08 17:40:36 crc kubenswrapper[5112]: I1208 17:40:36.533948 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:36 crc kubenswrapper[5112]: I1208 17:40:36.534765 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:36 crc kubenswrapper[5112]: I1208 17:40:36.534792 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:36 crc kubenswrapper[5112]: I1208 17:40:36.534801 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:36 crc kubenswrapper[5112]: E1208 17:40:36.535183 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:40:36 crc kubenswrapper[5112]: I1208 17:40:36.575330 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Dec 08 17:40:37 crc kubenswrapper[5112]: I1208 17:40:37.357729 5112 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Dec 08 17:40:37 crc kubenswrapper[5112]: [+]log ok
Dec 08 17:40:37 crc kubenswrapper[5112]: [+]etcd ok
Dec 08 17:40:37 crc kubenswrapper[5112]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Dec 08 17:40:37 crc kubenswrapper[5112]: [+]poststarthook/openshift.io-api-request-count-filter ok
Dec 08 17:40:37 crc kubenswrapper[5112]: [+]poststarthook/openshift.io-startkubeinformers ok
Dec 08 17:40:37 crc kubenswrapper[5112]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Dec 08 17:40:37 crc kubenswrapper[5112]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Dec 08 17:40:37 crc kubenswrapper[5112]: [+]poststarthook/start-apiserver-admission-initializer ok
Dec 08 17:40:37 crc kubenswrapper[5112]: [+]poststarthook/generic-apiserver-start-informers ok
Dec 08 17:40:37 crc kubenswrapper[5112]: [+]poststarthook/priority-and-fairness-config-consumer ok
Dec 08 17:40:37 crc kubenswrapper[5112]: [+]poststarthook/priority-and-fairness-filter ok
Dec 08 17:40:37 crc kubenswrapper[5112]: [+]poststarthook/storage-object-count-tracker-hook ok
Dec 08 17:40:37 crc kubenswrapper[5112]: [+]poststarthook/start-apiextensions-informers ok
Dec 08 17:40:37 crc kubenswrapper[5112]: [+]poststarthook/start-apiextensions-controllers ok
Dec 08 17:40:37 crc kubenswrapper[5112]: [+]poststarthook/crd-informer-synced ok
Dec 08 17:40:37 crc kubenswrapper[5112]: [+]poststarthook/start-system-namespaces-controller ok
Dec 08 17:40:37 crc kubenswrapper[5112]: [+]poststarthook/start-cluster-authentication-info-controller ok
Dec 08 17:40:37 crc kubenswrapper[5112]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Dec 08 17:40:37 crc kubenswrapper[5112]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Dec 08 17:40:37 crc kubenswrapper[5112]: [+]poststarthook/start-legacy-token-tracking-controller ok
Dec 08 17:40:37 crc kubenswrapper[5112]: [+]poststarthook/start-service-ip-repair-controllers ok
Dec 08 17:40:37 crc kubenswrapper[5112]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Dec 08 17:40:37 crc kubenswrapper[5112]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Dec 08 17:40:37 crc kubenswrapper[5112]: [+]poststarthook/priority-and-fairness-config-producer ok
Dec 08 17:40:37 crc kubenswrapper[5112]: [+]poststarthook/bootstrap-controller ok
Dec 08 17:40:37 crc kubenswrapper[5112]: [+]poststarthook/start-kubernetes-service-cidr-controller ok
Dec 08 17:40:37 crc kubenswrapper[5112]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Dec 08 17:40:37 crc kubenswrapper[5112]: [+]poststarthook/start-kube-aggregator-informers ok
Dec 08 17:40:37 crc kubenswrapper[5112]: [+]poststarthook/apiservice-status-local-available-controller ok
Dec 08 17:40:37 crc kubenswrapper[5112]: [+]poststarthook/apiservice-status-remote-available-controller ok
Dec 08 17:40:37 crc kubenswrapper[5112]: [+]poststarthook/apiservice-registration-controller ok
Dec 08 17:40:37 crc kubenswrapper[5112]: [+]poststarthook/apiservice-wait-for-first-sync ok
Dec 08 17:40:37 crc kubenswrapper[5112]: [+]poststarthook/apiservice-discovery-controller ok
Dec 08 17:40:37 crc kubenswrapper[5112]: [+]poststarthook/kube-apiserver-autoregistration ok
Dec 08 17:40:37 crc kubenswrapper[5112]: [+]autoregister-completion ok
Dec 08 17:40:37 crc kubenswrapper[5112]: [+]poststarthook/apiservice-openapi-controller ok
Dec 08 17:40:37 crc kubenswrapper[5112]: [+]poststarthook/apiservice-openapiv3-controller ok
Dec 08 17:40:37 crc kubenswrapper[5112]: livez check failed
Dec 08 17:40:37 crc kubenswrapper[5112]: I1208 17:40:37.357790 5112 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 08 17:40:37 crc kubenswrapper[5112]: I1208 17:40:37.434628 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:37 crc kubenswrapper[5112]: I1208 17:40:37.434659 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:37 crc kubenswrapper[5112]: I1208 17:40:37.435246 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:37 crc kubenswrapper[5112]: I1208 17:40:37.435281 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:37 crc kubenswrapper[5112]: I1208 17:40:37.435290 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:37 crc kubenswrapper[5112]: E1208 17:40:37.435536 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:40:37 crc kubenswrapper[5112]: I1208 17:40:37.435607 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:37 crc kubenswrapper[5112]: I1208 17:40:37.435647 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:37 crc kubenswrapper[5112]: I1208 17:40:37.435657 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:37 crc kubenswrapper[5112]: E1208 17:40:37.436034 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:40:37 crc kubenswrapper[5112]: I1208 17:40:37.446236 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Dec 08 17:40:38 crc kubenswrapper[5112]: I1208 17:40:38.437805 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:38 crc kubenswrapper[5112]: I1208 17:40:38.438989 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:38 crc kubenswrapper[5112]: I1208 17:40:38.439036 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:38 crc kubenswrapper[5112]: I1208 17:40:38.439055 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:38 crc kubenswrapper[5112]: E1208 17:40:38.439916 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:40:39 crc kubenswrapper[5112]: E1208 17:40:39.477741 5112 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="6.4s"
Dec 08 17:40:39 crc kubenswrapper[5112]: I1208 17:40:39.690802 5112 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Dec 08 17:40:39 crc kubenswrapper[5112]: I1208 17:40:39.715817 5112 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Dec 08 17:40:40 crc kubenswrapper[5112]: I1208 17:40:40.416947 5112 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Dec 08 17:40:40 crc kubenswrapper[5112]: I1208 17:40:40.417037 5112 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Dec 08 17:40:41 crc kubenswrapper[5112]: I1208 17:40:41.228982 5112 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 08 17:40:41 crc kubenswrapper[5112]: I1208 17:40:41.229121 5112 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 08 17:40:41 crc kubenswrapper[5112]: I1208 17:40:41.260753 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 17:40:41 crc kubenswrapper[5112]: I1208 17:40:41.261143 5112 trace.go:236] Trace[2130059637]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Dec-2025 17:40:27.131) (total time: 14129ms):
Dec 08 17:40:41 crc kubenswrapper[5112]: Trace[2130059637]: ---"Objects listed" error:csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope 14129ms (17:40:41.261)
Dec 08 17:40:41 crc kubenswrapper[5112]: Trace[2130059637]: [14.129696083s] [14.129696083s] END
Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.261181 5112 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 08 17:40:41 crc kubenswrapper[5112]: I1208 17:40:41.261341 5112 trace.go:236] Trace[1127066606]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Dec-2025 17:40:27.106) (total time: 14154ms):
Dec 08 17:40:41 crc kubenswrapper[5112]: Trace[1127066606]: ---"Objects listed" error:nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope 14154ms (17:40:41.261)
Dec 08 17:40:41 crc kubenswrapper[5112]: Trace[1127066606]: [14.154389252s] [14.154389252s] END
Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.261404 5112 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.261932 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e41c03c1221 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:23.257526817 +0000 UTC m=+0.267075518,LastTimestamp:2025-12-08 17:40:23.257526817 +0000 UTC m=+0.267075518,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:40:41 crc kubenswrapper[5112]: I1208 17:40:41.262293 5112 trace.go:236] Trace[1197369125]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Dec-2025 17:40:26.714) (total time: 14548ms):
Dec 08 17:40:41 crc kubenswrapper[5112]: Trace[1197369125]: ---"Objects listed" error:runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope 14548ms (17:40:41.262)
Dec 08 17:40:41 crc kubenswrapper[5112]: Trace[1197369125]: [14.548151011s] [14.548151011s] END
Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.262484 5112 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.264608 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e41c36c110f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:23.311003919 +0000 UTC m=+0.320552630,LastTimestamp:2025-12-08 17:40:23.311003919 +0000 UTC m=+0.320552630,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.266595 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e41c36c95f1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:23.311037937 +0000 UTC m=+0.320586657,LastTimestamp:2025-12-08 17:40:23.311037937 +0000 UTC m=+0.320586657,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.271582 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e41c36ccb55 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:23.311051605 +0000 UTC m=+0.320600316,LastTimestamp:2025-12-08 17:40:23.311051605 +0000 UTC m=+0.320600316,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.277941 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e41c7d12318 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:23.384736536 +0000 UTC m=+0.394285237,LastTimestamp:2025-12-08 17:40:23.384736536 +0000 UTC m=+0.394285237,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.286199 5112 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e41c36c110f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e41c36c110f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:23.311003919 +0000 UTC m=+0.320552630,LastTimestamp:2025-12-08 17:40:23.417851102 +0000 UTC m=+0.427399803,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.292140 5112 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e41c36c95f1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e41c36c95f1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:23.311037937 +0000 UTC m=+0.320586657,LastTimestamp:2025-12-08 17:40:23.41787913 +0000 UTC m=+0.427427831,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.301624 5112 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e41c36ccb55\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e41c36ccb55 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:23.311051605 +0000 UTC m=+0.320600316,LastTimestamp:2025-12-08 17:40:23.417889379 +0000 UTC m=+0.427438080,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.307743 5112 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e41c36c110f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e41c36c110f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:23.311003919 +0000 UTC m=+0.320552630,LastTimestamp:2025-12-08 17:40:23.419186013 +0000 UTC m=+0.428734724,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.316144 5112 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e41c36c95f1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e41c36c95f1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:23.311037937 +0000 UTC m=+0.320586657,LastTimestamp:2025-12-08 17:40:23.419206061 +0000 UTC m=+0.428754762,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.325272 5112 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e41c36ccb55\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e41c36ccb55 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:23.311051605 +0000 UTC m=+0.320600316,LastTimestamp:2025-12-08 17:40:23.41921737 +0000 UTC m=+0.428766071,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.333332 5112 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e41c36c110f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e41c36c110f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:23.311003919 +0000 UTC m=+0.320552630,LastTimestamp:2025-12-08 17:40:23.419319592 +0000 UTC m=+0.428868293,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.342900 5112 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e41c36c95f1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e41c36c95f1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:23.311037937 +0000 UTC m=+0.320586657,LastTimestamp:2025-12-08 17:40:23.419345889 +0000 UTC m=+0.428894590,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.348844 5112 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e41c36ccb55\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e41c36ccb55 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:23.311051605 +0000 UTC m=+0.320600316,LastTimestamp:2025-12-08 17:40:23.419356489 +0000 UTC m=+0.428905189,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.353947 5112 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e41c36c110f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e41c36c110f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:23.311003919 +0000 UTC m=+0.320552630,LastTimestamp:2025-12-08 17:40:23.420179831 +0000 UTC m=+0.429728532,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.359590 5112 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e41c36c95f1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e41c36c95f1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:23.311037937 +0000 UTC m=+0.320586657,LastTimestamp:2025-12-08 17:40:23.42019576 +0000 UTC m=+0.429744451,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.366334 5112 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e41c36ccb55\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e41c36ccb55 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:23.311051605 +0000 UTC m=+0.320600316,LastTimestamp:2025-12-08 17:40:23.420203739 +0000 UTC m=+0.429752440,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.369274 5112 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e41c36c110f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e41c36c110f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:23.311003919 +0000 UTC m=+0.320552630,LastTimestamp:2025-12-08 17:40:23.420265434 +0000 UTC m=+0.429814135,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.373856 5112 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e41c36c95f1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e41c36c95f1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:23.311037937 +0000 UTC m=+0.320586657,LastTimestamp:2025-12-08 17:40:23.420277963 +0000 UTC m=+0.429826664,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.379207 5112 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e41c36ccb55\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e41c36ccb55 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc
status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:23.311051605 +0000 UTC m=+0.320600316,LastTimestamp:2025-12-08 17:40:23.420287502 +0000 UTC m=+0.429836203,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.383129 5112 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e41c36c110f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e41c36c110f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:23.311003919 +0000 UTC m=+0.320552630,LastTimestamp:2025-12-08 17:40:23.421200557 +0000 UTC m=+0.430749258,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.387694 5112 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e41c36c95f1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e41c36c95f1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:23.311037937 +0000 UTC 
m=+0.320586657,LastTimestamp:2025-12-08 17:40:23.421215656 +0000 UTC m=+0.430764357,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.393273 5112 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e41c36ccb55\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e41c36ccb55 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:23.311051605 +0000 UTC m=+0.320600316,LastTimestamp:2025-12-08 17:40:23.421223665 +0000 UTC m=+0.430772366,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.397649 5112 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e41c36c110f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e41c36c110f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:23.311003919 +0000 UTC m=+0.320552630,LastTimestamp:2025-12-08 17:40:23.421295789 +0000 UTC m=+0.430844490,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.404307 5112 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e41c36c95f1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e41c36c95f1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:23.311037937 +0000 UTC m=+0.320586657,LastTimestamp:2025-12-08 17:40:23.421309128 +0000 UTC m=+0.430857829,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.409074 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f4e41df383dac openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:23.777369516 +0000 UTC m=+0.786918217,LastTimestamp:2025-12-08 
17:40:23.777369516 +0000 UTC m=+0.786918217,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.413965 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e41df624272 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:23.78012325 +0000 UTC m=+0.789671951,LastTimestamp:2025-12-08 17:40:23.78012325 +0000 UTC m=+0.789671951,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.417763 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e41e051694d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:23.795796301 +0000 UTC m=+0.805345002,LastTimestamp:2025-12-08 17:40:23.795796301 +0000 UTC m=+0.805345002,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.421393 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e41e15ba65a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:23.813244506 +0000 UTC m=+0.822793207,LastTimestamp:2025-12-08 17:40:23.813244506 +0000 UTC m=+0.822793207,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.425676 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e41e1cb67c1 
openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:23.820568513 +0000 UTC m=+0.830117204,LastTimestamp:2025-12-08 17:40:23.820568513 +0000 UTC m=+0.830117204,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.430153 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e41fa05acbe openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:24.227040446 +0000 UTC m=+1.236589147,LastTimestamp:2025-12-08 17:40:24.227040446 +0000 UTC m=+1.236589147,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.433981 5112 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e41fa07c057 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:24.227176535 +0000 UTC m=+1.236725226,LastTimestamp:2025-12-08 17:40:24.227176535 +0000 UTC m=+1.236725226,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.438287 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e41fa3d8ccd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:24.230702285 +0000 UTC m=+1.240250986,LastTimestamp:2025-12-08 17:40:24.230702285 +0000 UTC m=+1.240250986,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc 
kubenswrapper[5112]: E1208 17:40:41.442472 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f4e41fa3e3efd openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:24.230747901 +0000 UTC m=+1.240296612,LastTimestamp:2025-12-08 17:40:24.230747901 +0000 UTC m=+1.240296612,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.447305 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e41fa4537ff openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:24.231204863 +0000 UTC m=+1.240753564,LastTimestamp:2025-12-08 17:40:24.231204863 +0000 UTC m=+1.240753564,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc 
kubenswrapper[5112]: E1208 17:40:41.450967 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e41faa93756 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:24.237758294 +0000 UTC m=+1.247306995,LastTimestamp:2025-12-08 17:40:24.237758294 +0000 UTC m=+1.247306995,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.456658 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e41fab3e2fd openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:24.238457597 +0000 UTC m=+1.248006298,LastTimestamp:2025-12-08 17:40:24.238457597 +0000 UTC m=+1.248006298,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.460891 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e41fabcb493 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:24.239035539 +0000 UTC m=+1.248584240,LastTimestamp:2025-12-08 17:40:24.239035539 +0000 UTC m=+1.248584240,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.466533 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e41fb1a5941 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 
17:40:24.245172545 +0000 UTC m=+1.254721266,LastTimestamp:2025-12-08 17:40:24.245172545 +0000 UTC m=+1.254721266,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.471871 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e41fb27f147 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:24.246063431 +0000 UTC m=+1.255612132,LastTimestamp:2025-12-08 17:40:24.246063431 +0000 UTC m=+1.255612132,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.475608 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f4e41fcfeee19 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container 
setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:24.276930073 +0000 UTC m=+1.286478774,LastTimestamp:2025-12-08 17:40:24.276930073 +0000 UTC m=+1.286478774,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.481043 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e41ffe51e73 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:24.325570163 +0000 UTC m=+1.335118864,LastTimestamp:2025-12-08 17:40:24.325570163 +0000 UTC m=+1.335118864,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.486922 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e42002aa9c0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:24.330127808 +0000 UTC m=+1.339676509,LastTimestamp:2025-12-08 17:40:24.330127808 +0000 UTC m=+1.339676509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.492653 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f4e4200345ebe openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:24.330763966 +0000 UTC m=+1.340312667,LastTimestamp:2025-12-08 17:40:24.330763966 +0000 UTC m=+1.340312667,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.502504 5112 event.go:359] 
"Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e4200bb37cf openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:24.339601359 +0000 UTC m=+1.349150060,LastTimestamp:2025-12-08 17:40:24.339601359 +0000 UTC m=+1.349150060,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.506230 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e420b76c48d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:24.519664781 +0000 UTC m=+1.529213482,LastTimestamp:2025-12-08 17:40:24.519664781 
+0000 UTC m=+1.529213482,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.506387 5112 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.510807 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e420c53d619 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:24.534152729 +0000 UTC m=+1.543701430,LastTimestamp:2025-12-08 17:40:24.534152729 +0000 UTC m=+1.543701430,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.513699 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e420c60d96d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:24.535005549 +0000 UTC m=+1.544554250,LastTimestamp:2025-12-08 17:40:24.535005549 +0000 UTC m=+1.544554250,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.515870 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e420c6cd877 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:24.535791735 +0000 UTC m=+1.545340436,LastTimestamp:2025-12-08 17:40:24.535791735 +0000 UTC m=+1.545340436,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.522169 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e420d254d81 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:24.547880321 +0000 UTC m=+1.557429022,LastTimestamp:2025-12-08 17:40:24.547880321 +0000 UTC m=+1.557429022,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.526837 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e420dd202d2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:24.55919893 +0000 UTC m=+1.568747631,LastTimestamp:2025-12-08 17:40:24.55919893 +0000 UTC m=+1.568747631,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.533585 5112 event.go:359] "Server rejected event 
(will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e420ddfb07c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:24.560095356 +0000 UTC m=+1.569644057,LastTimestamp:2025-12-08 17:40:24.560095356 +0000 UTC m=+1.569644057,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.544806 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f4e420df63a2b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:24.561572395 +0000 UTC m=+1.571121096,LastTimestamp:2025-12-08 17:40:24.561572395 +0000 UTC 
m=+1.571121096,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.552972 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e420df6a1ca openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:24.561598922 +0000 UTC m=+1.571147623,LastTimestamp:2025-12-08 17:40:24.561598922 +0000 UTC m=+1.571147623,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.559852 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e420dfaf17d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:24.561881469 +0000 UTC m=+1.571430180,LastTimestamp:2025-12-08 
17:40:24.561881469 +0000 UTC m=+1.571430180,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.572525 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e420e05208a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:24.562548874 +0000 UTC m=+1.572097575,LastTimestamp:2025-12-08 17:40:24.562548874 +0000 UTC m=+1.572097575,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.579098 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f4e420f042504 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:24.5792617 +0000 UTC m=+1.588810391,LastTimestamp:2025-12-08 17:40:24.5792617 +0000 UTC m=+1.588810391,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.584927 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e420f74fd30 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:24.586657072 +0000 UTC m=+1.596205773,LastTimestamp:2025-12-08 17:40:24.586657072 +0000 UTC m=+1.596205773,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.592969 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e421a715050 openshift-kube-scheduler 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:24.770965584 +0000 UTC m=+1.780514285,LastTimestamp:2025-12-08 17:40:24.770965584 +0000 UTC m=+1.780514285,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.598036 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e421caa5154 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:24.808255828 +0000 UTC m=+1.817804529,LastTimestamp:2025-12-08 17:40:24.808255828 +0000 UTC m=+1.817804529,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.604453 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e421cbe8963 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:24.809580899 +0000 UTC m=+1.819129590,LastTimestamp:2025-12-08 17:40:24.809580899 +0000 UTC m=+1.819129590,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.613682 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e42291f4465 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:25.017246821 +0000 UTC m=+2.026795522,LastTimestamp:2025-12-08 17:40:25.017246821 +0000 UTC m=+2.026795522,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 
17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.618061 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e422a5cda51 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:25.038060113 +0000 UTC m=+2.047608804,LastTimestamp:2025-12-08 17:40:25.038060113 +0000 UTC m=+2.047608804,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.622520 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e42304bfc83 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:25.137618051 +0000 UTC m=+2.147166752,LastTimestamp:2025-12-08 17:40:25.137618051 +0000 UTC 
m=+2.147166752,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.627894 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e4230560ff6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:25.13827839 +0000 UTC m=+2.147827091,LastTimestamp:2025-12-08 17:40:25.13827839 +0000 UTC m=+2.147827091,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.632860 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e4230f6a0a8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container 
kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:25.148801192 +0000 UTC m=+2.158349883,LastTimestamp:2025-12-08 17:40:25.148801192 +0000 UTC m=+2.158349883,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.640038 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e423104f44b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:25.149740107 +0000 UTC m=+2.159288798,LastTimestamp:2025-12-08 17:40:25.149740107 +0000 UTC m=+2.159288798,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.644119 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e4231189b68 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:25.151028072 +0000 UTC m=+2.160576763,LastTimestamp:2025-12-08 17:40:25.151028072 +0000 UTC m=+2.160576763,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.648046 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e423139c7fb openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:25.153202171 +0000 UTC m=+2.162750872,LastTimestamp:2025-12-08 17:40:25.153202171 +0000 UTC m=+2.162750872,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.652715 5112 event.go:359] "Server 
rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e423d1edac7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:25.352764103 +0000 UTC m=+2.362312804,LastTimestamp:2025-12-08 17:40:25.352764103 +0000 UTC m=+2.362312804,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.659217 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e423d5d6392 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:25.356862354 +0000 UTC m=+2.366411055,LastTimestamp:2025-12-08 17:40:25.356862354 +0000 UTC 
m=+2.366411055,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.663789 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e423da4cd51 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:25.361542481 +0000 UTC m=+2.371091182,LastTimestamp:2025-12-08 17:40:25.361542481 +0000 UTC m=+2.371091182,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.667519 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e423e4b131e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container 
kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:25.372439326 +0000 UTC m=+2.381988027,LastTimestamp:2025-12-08 17:40:25.372439326 +0000 UTC m=+2.381988027,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.671809 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e423e705c56 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:25.374882902 +0000 UTC m=+2.384431603,LastTimestamp:2025-12-08 17:40:25.374882902 +0000 UTC m=+2.384431603,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.675872 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e423e84c3a4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:25.376220068 +0000 UTC m=+2.385768779,LastTimestamp:2025-12-08 17:40:25.376220068 +0000 UTC m=+2.385768779,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.677377 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e424a42ef11 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:25.573232401 +0000 UTC m=+2.582781102,LastTimestamp:2025-12-08 17:40:25.573232401 +0000 UTC m=+2.582781102,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.680582 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e424a4ae00d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:25.573752845 +0000 UTC m=+2.583301546,LastTimestamp:2025-12-08 17:40:25.573752845 +0000 UTC m=+2.583301546,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.684843 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e424aba2821 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:25.581045793 +0000 UTC m=+2.590594504,LastTimestamp:2025-12-08 17:40:25.581045793 +0000 UTC m=+2.590594504,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.688387 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: 
User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e424ac88dc6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:25.581989318 +0000 UTC m=+2.591538019,LastTimestamp:2025-12-08 17:40:25.581989318 +0000 UTC m=+2.591538019,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.691808 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e424b302b56 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:25.588779862 +0000 UTC m=+2.598328563,LastTimestamp:2025-12-08 17:40:25.588779862 +0000 UTC m=+2.598328563,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.695385 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e42541eb0e4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:25.738629348 +0000 UTC m=+2.748178089,LastTimestamp:2025-12-08 17:40:25.738629348 +0000 UTC m=+2.748178089,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.698714 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e4254c22ab2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:25.749342898 +0000 UTC m=+2.758891609,LastTimestamp:2025-12-08 
17:40:25.749342898 +0000 UTC m=+2.758891609,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.702540 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e427b56c6bc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:26.39661638 +0000 UTC m=+3.406165121,LastTimestamp:2025-12-08 17:40:26.39661638 +0000 UTC m=+3.406165121,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.706576 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e42897cff5a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:26.634002266 +0000 UTC 
m=+3.643550977,LastTimestamp:2025-12-08 17:40:26.634002266 +0000 UTC m=+3.643550977,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.710072 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e428a1160d5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:26.643726549 +0000 UTC m=+3.653275260,LastTimestamp:2025-12-08 17:40:26.643726549 +0000 UTC m=+3.653275260,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.714346 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e428a23a391 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:26.644923281 +0000 UTC m=+3.654471992,LastTimestamp:2025-12-08 17:40:26.644923281 +0000 UTC m=+3.654471992,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.719013 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e4297fdd744 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:26.877327172 +0000 UTC m=+3.886875903,LastTimestamp:2025-12-08 17:40:26.877327172 +0000 UTC m=+3.886875903,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.724294 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e4298fcf619 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:26.894046745 +0000 UTC 
m=+3.903595486,LastTimestamp:2025-12-08 17:40:26.894046745 +0000 UTC m=+3.903595486,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.729366 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e4299158a07 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:26.895657479 +0000 UTC m=+3.905206220,LastTimestamp:2025-12-08 17:40:26.895657479 +0000 UTC m=+3.905206220,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.736062 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e42a676a3f3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: 
etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:27.120124915 +0000 UTC m=+4.129673616,LastTimestamp:2025-12-08 17:40:27.120124915 +0000 UTC m=+4.129673616,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.741068 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e42a7070edf openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:27.129589471 +0000 UTC m=+4.139138162,LastTimestamp:2025-12-08 17:40:27.129589471 +0000 UTC m=+4.139138162,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.746004 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e42a716c8d7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:27.130620119 +0000 UTC m=+4.140168820,LastTimestamp:2025-12-08 17:40:27.130620119 +0000 UTC m=+4.140168820,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.750780 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e42b2470ed8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:27.318333144 +0000 UTC m=+4.327881845,LastTimestamp:2025-12-08 17:40:27.318333144 +0000 UTC m=+4.327881845,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.756139 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e42b2cf325f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:27.327255135 +0000 UTC m=+4.336803836,LastTimestamp:2025-12-08 17:40:27.327255135 +0000 UTC m=+4.336803836,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: I1208 17:40:41.763066 5112 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 08 17:40:41 crc kubenswrapper[5112]: I1208 17:40:41.763178 5112 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.764166 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e42b2dd88d2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:27.32819477 +0000 UTC m=+4.337743471,LastTimestamp:2025-12-08 17:40:27.32819477 +0000 UTC m=+4.337743471,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.769439 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e42be2ac7cf openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:27.517806543 +0000 UTC m=+4.527355254,LastTimestamp:2025-12-08 17:40:27.517806543 +0000 UTC m=+4.527355254,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.775239 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e42bf0702e9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:27.532239593 +0000 UTC m=+4.541788304,LastTimestamp:2025-12-08 17:40:27.532239593 +0000 UTC m=+4.541788304,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.781343 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Dec 08 17:40:41 crc kubenswrapper[5112]: &Event{ObjectMeta:{kube-controller-manager-crc.187f4e439b539bb1 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Dec 08 17:40:41 crc kubenswrapper[5112]: body: Dec 08 17:40:41 crc kubenswrapper[5112]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:31.228246961 +0000 UTC m=+8.237795672,LastTimestamp:2025-12-08 17:40:31.228246961 +0000 UTC m=+8.237795672,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 17:40:41 crc kubenswrapper[5112]: > Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 
17:40:41.783344 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e439b5563b8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:31.228363704 +0000 UTC m=+8.237912415,LastTimestamp:2025-12-08 17:40:31.228363704 +0000 UTC m=+8.237912415,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.786501 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 08 17:40:41 crc kubenswrapper[5112]: &Event{ObjectMeta:{kube-apiserver-crc.187f4e44c72fdb67 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Dec 08 17:40:41 crc kubenswrapper[5112]: body: 
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 08 17:40:41 crc kubenswrapper[5112]: Dec 08 17:40:41 crc kubenswrapper[5112]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:36.259068775 +0000 UTC m=+13.268617476,LastTimestamp:2025-12-08 17:40:36.259068775 +0000 UTC m=+13.268617476,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 17:40:41 crc kubenswrapper[5112]: > Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.788201 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e44c7312e5c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:36.259155548 +0000 UTC m=+13.268704249,LastTimestamp:2025-12-08 17:40:36.259155548 +0000 UTC m=+13.268704249,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.792143 5112 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e44c72fdb67\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event=< Dec 08 17:40:41 crc kubenswrapper[5112]: &Event{ObjectMeta:{kube-apiserver-crc.187f4e44c72fdb67 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Dec 08 17:40:41 crc kubenswrapper[5112]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 08 17:40:41 crc kubenswrapper[5112]: Dec 08 17:40:41 crc kubenswrapper[5112]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:36.259068775 +0000 UTC m=+13.268617476,LastTimestamp:2025-12-08 17:40:36.27110427 +0000 UTC m=+13.280652971,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 17:40:41 crc kubenswrapper[5112]: > Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.796197 5112 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e44c7312e5c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e44c7312e5c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 
403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:36.259155548 +0000 UTC m=+13.268704249,LastTimestamp:2025-12-08 17:40:36.271190832 +0000 UTC m=+13.280739533,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.800497 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 08 17:40:41 crc kubenswrapper[5112]: &Event{ObjectMeta:{kube-apiserver-crc.187f4e4508acb8f2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 500 Dec 08 17:40:41 crc kubenswrapper[5112]: body: [+]ping ok Dec 08 17:40:41 crc kubenswrapper[5112]: [+]log ok Dec 08 17:40:41 crc kubenswrapper[5112]: [+]etcd ok Dec 08 17:40:41 crc kubenswrapper[5112]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 08 17:40:41 crc kubenswrapper[5112]: [+]poststarthook/openshift.io-api-request-count-filter ok Dec 08 17:40:41 crc kubenswrapper[5112]: [+]poststarthook/openshift.io-startkubeinformers ok Dec 08 17:40:41 crc kubenswrapper[5112]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Dec 08 17:40:41 crc kubenswrapper[5112]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Dec 08 17:40:41 crc kubenswrapper[5112]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 08 17:40:41 crc kubenswrapper[5112]: [+]poststarthook/generic-apiserver-start-informers ok Dec 08 17:40:41 crc kubenswrapper[5112]: 
[+]poststarthook/priority-and-fairness-config-consumer ok
Dec 08 17:40:41 crc kubenswrapper[5112]: [+]poststarthook/priority-and-fairness-filter ok
Dec 08 17:40:41 crc kubenswrapper[5112]: [+]poststarthook/storage-object-count-tracker-hook ok
Dec 08 17:40:41 crc kubenswrapper[5112]: [+]poststarthook/start-apiextensions-informers ok
Dec 08 17:40:41 crc kubenswrapper[5112]: [+]poststarthook/start-apiextensions-controllers ok
Dec 08 17:40:41 crc kubenswrapper[5112]: [+]poststarthook/crd-informer-synced ok
Dec 08 17:40:41 crc kubenswrapper[5112]: [+]poststarthook/start-system-namespaces-controller ok
Dec 08 17:40:41 crc kubenswrapper[5112]: [+]poststarthook/start-cluster-authentication-info-controller ok
Dec 08 17:40:41 crc kubenswrapper[5112]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Dec 08 17:40:41 crc kubenswrapper[5112]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Dec 08 17:40:41 crc kubenswrapper[5112]: [+]poststarthook/start-legacy-token-tracking-controller ok
Dec 08 17:40:41 crc kubenswrapper[5112]: [+]poststarthook/start-service-ip-repair-controllers ok
Dec 08 17:40:41 crc kubenswrapper[5112]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Dec 08 17:40:41 crc kubenswrapper[5112]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Dec 08 17:40:41 crc kubenswrapper[5112]: [+]poststarthook/priority-and-fairness-config-producer ok
Dec 08 17:40:41 crc kubenswrapper[5112]: [+]poststarthook/bootstrap-controller ok
Dec 08 17:40:41 crc kubenswrapper[5112]: [+]poststarthook/start-kubernetes-service-cidr-controller ok
Dec 08 17:40:41 crc kubenswrapper[5112]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Dec 08 17:40:41 crc kubenswrapper[5112]: [+]poststarthook/start-kube-aggregator-informers ok
Dec 08 17:40:41 crc kubenswrapper[5112]: [+]poststarthook/apiservice-status-local-available-controller ok
Dec 08 17:40:41 crc kubenswrapper[5112]: [+]poststarthook/apiservice-status-remote-available-controller ok
Dec 08 17:40:41 crc kubenswrapper[5112]: [+]poststarthook/apiservice-registration-controller ok
Dec 08 17:40:41 crc kubenswrapper[5112]: [+]poststarthook/apiservice-wait-for-first-sync ok
Dec 08 17:40:41 crc kubenswrapper[5112]: [+]poststarthook/apiservice-discovery-controller ok
Dec 08 17:40:41 crc kubenswrapper[5112]: [+]poststarthook/kube-apiserver-autoregistration ok
Dec 08 17:40:41 crc kubenswrapper[5112]: [+]autoregister-completion ok
Dec 08 17:40:41 crc kubenswrapper[5112]: [+]poststarthook/apiservice-openapi-controller ok
Dec 08 17:40:41 crc kubenswrapper[5112]: [+]poststarthook/apiservice-openapiv3-controller ok
Dec 08 17:40:41 crc kubenswrapper[5112]: livez check failed
Dec 08 17:40:41 crc kubenswrapper[5112]: 
Dec 08 17:40:41 crc kubenswrapper[5112]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:37.357770994 +0000 UTC m=+14.367319695,LastTimestamp:2025-12-08 17:40:37.357770994 +0000 UTC m=+14.367319695,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Dec 08 17:40:41 crc kubenswrapper[5112]: >
Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.804682 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e4508ad53c1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:37.357810625 +0000 UTC m=+14.367359326,LastTimestamp:2025-12-08 17:40:37.357810625 +0000 UTC m=+14.367359326,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.810774 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Dec 08 17:40:41 crc kubenswrapper[5112]: &Event{ObjectMeta:{kube-apiserver-crc.187f4e45bf04f96d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused
Dec 08 17:40:41 crc kubenswrapper[5112]: body: 
Dec 08 17:40:41 crc kubenswrapper[5112]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:40.417007981 +0000 UTC m=+17.426556712,LastTimestamp:2025-12-08 17:40:40.417007981 +0000 UTC m=+17.426556712,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Dec 08 17:40:41 crc kubenswrapper[5112]: >
Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.815800 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e45bf05df3d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:40.417066813 +0000 UTC m=+17.426615554,LastTimestamp:2025-12-08 17:40:40.417066813 +0000 UTC m=+17.426615554,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.820413 5112 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.187f4e439b539bb1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=<
Dec 08 17:40:41 crc kubenswrapper[5112]: &Event{ObjectMeta:{kube-controller-manager-crc.187f4e439b539bb1 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Dec 08 17:40:41 crc kubenswrapper[5112]: body: 
Dec 08 17:40:41 crc kubenswrapper[5112]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:31.228246961 +0000 UTC m=+8.237795672,LastTimestamp:2025-12-08 17:40:41.229055319 +0000 UTC m=+18.238604060,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Dec 08 17:40:41 crc kubenswrapper[5112]: >
Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.825077 5112 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.187f4e439b5563b8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e439b5563b8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:31.228363704 +0000 UTC m=+8.237912415,LastTimestamp:2025-12-08 17:40:41.229155032 +0000 UTC m=+18.238703763,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.829549 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Dec 08 17:40:41 crc kubenswrapper[5112]: &Event{ObjectMeta:{kube-apiserver-crc.187f4e460f413c3a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused
Dec 08 17:40:41 crc kubenswrapper[5112]: body: 
Dec 08 17:40:41 crc kubenswrapper[5112]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:41.763134522 +0000 UTC m=+18.772683243,LastTimestamp:2025-12-08 17:40:41.763134522 +0000 UTC m=+18.772683243,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Dec 08 17:40:41 crc kubenswrapper[5112]: >
Dec 08 17:40:41 crc kubenswrapper[5112]: E1208 17:40:41.834652 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e460f425630 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:41.763206704 +0000 UTC m=+18.772755405,LastTimestamp:2025-12-08 17:40:41.763206704 +0000 UTC m=+18.772755405,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:40:42 crc kubenswrapper[5112]: I1208 17:40:42.256320 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 17:40:42 crc kubenswrapper[5112]: E1208 17:40:42.329901 5112 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 08 17:40:42 crc kubenswrapper[5112]: I1208 17:40:42.337894 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:40:42 crc kubenswrapper[5112]: I1208 17:40:42.338278 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:42 crc kubenswrapper[5112]: I1208 17:40:42.339238 5112 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Dec 08 17:40:42 crc kubenswrapper[5112]: I1208 17:40:42.339665 5112 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Dec 08 17:40:42 crc kubenswrapper[5112]: I1208 17:40:42.340216 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:42 crc kubenswrapper[5112]: I1208 17:40:42.340272 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:42 crc kubenswrapper[5112]: I1208 17:40:42.340287 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:42 crc kubenswrapper[5112]: E1208 17:40:42.340878 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:40:42 crc kubenswrapper[5112]: E1208 17:40:42.346432 5112 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e45bf04f96d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Dec 08 17:40:42 crc kubenswrapper[5112]: &Event{ObjectMeta:{kube-apiserver-crc.187f4e45bf04f96d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused
Dec 08 17:40:42 crc kubenswrapper[5112]: body: 
Dec 08 17:40:42 crc kubenswrapper[5112]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:40.417007981 +0000 UTC m=+17.426556712,LastTimestamp:2025-12-08 17:40:42.339397012 +0000 UTC m=+19.348945723,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Dec 08 17:40:42 crc kubenswrapper[5112]: >
Dec 08 17:40:42 crc kubenswrapper[5112]: I1208 17:40:42.350041 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:40:42 crc kubenswrapper[5112]: E1208 17:40:42.351253 5112 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e45bf05df3d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e45bf05df3d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:40.417066813 +0000 UTC m=+17.426615554,LastTimestamp:2025-12-08 17:40:42.339741701 +0000 UTC m=+19.349290422,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:40:42 crc kubenswrapper[5112]: I1208 17:40:42.450270 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Dec 08 17:40:42 crc kubenswrapper[5112]: I1208 17:40:42.452528 5112 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="464a4e10d8ff56b45cb38a25371b700b53ade63b40535c26b880f39ce81f1a0c" exitCode=255
Dec 08 17:40:42 crc kubenswrapper[5112]: I1208 17:40:42.452608 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"464a4e10d8ff56b45cb38a25371b700b53ade63b40535c26b880f39ce81f1a0c"}
Dec 08 17:40:42 crc kubenswrapper[5112]: I1208 17:40:42.452751 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:42 crc kubenswrapper[5112]: I1208 17:40:42.453436 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:42 crc kubenswrapper[5112]: I1208 17:40:42.453483 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:42 crc kubenswrapper[5112]: I1208 17:40:42.453497 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:42 crc kubenswrapper[5112]: E1208 17:40:42.453972 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:40:42 crc kubenswrapper[5112]: I1208 17:40:42.454285 5112 scope.go:117] "RemoveContainer" containerID="464a4e10d8ff56b45cb38a25371b700b53ade63b40535c26b880f39ce81f1a0c"
Dec 08 17:40:42 crc kubenswrapper[5112]: E1208 17:40:42.464589 5112 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e424ac88dc6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e424ac88dc6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:25.581989318 +0000 UTC m=+2.591538019,LastTimestamp:2025-12-08 17:40:42.455930054 +0000 UTC m=+19.465478755,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:40:42 crc kubenswrapper[5112]: E1208 17:40:42.703697 5112 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e42541eb0e4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e42541eb0e4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:25.738629348 +0000 UTC m=+2.748178089,LastTimestamp:2025-12-08 17:40:42.699308388 +0000 UTC m=+19.708857079,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:40:42 crc kubenswrapper[5112]: E1208 17:40:42.721316 5112 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e4254c22ab2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e4254c22ab2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:25.749342898 +0000 UTC m=+2.758891609,LastTimestamp:2025-12-08 17:40:42.716732597 +0000 UTC m=+19.726281298,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:40:43 crc kubenswrapper[5112]: I1208 17:40:43.252050 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 17:40:43 crc kubenswrapper[5112]: E1208 17:40:43.378252 5112 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 08 17:40:43 crc kubenswrapper[5112]: I1208 17:40:43.455461 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Dec 08 17:40:43 crc kubenswrapper[5112]: I1208 17:40:43.457227 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"7989362525e36cbe9b02a774bf2b229240e409651e928227805c9d15dd255ff9"}
Dec 08 17:40:43 crc kubenswrapper[5112]: I1208 17:40:43.457384 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:43 crc kubenswrapper[5112]: I1208 17:40:43.457946 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:43 crc kubenswrapper[5112]: I1208 17:40:43.457988 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:43 crc kubenswrapper[5112]: I1208 17:40:43.458003 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:43 crc kubenswrapper[5112]: E1208 17:40:43.458462 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:40:44 crc kubenswrapper[5112]: I1208 17:40:44.250944 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 17:40:44 crc kubenswrapper[5112]: I1208 17:40:44.460953 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Dec 08 17:40:44 crc kubenswrapper[5112]: I1208 17:40:44.461347 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Dec 08 17:40:44 crc kubenswrapper[5112]: I1208 17:40:44.462985 5112 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="7989362525e36cbe9b02a774bf2b229240e409651e928227805c9d15dd255ff9" exitCode=255
Dec 08 17:40:44 crc kubenswrapper[5112]: I1208 17:40:44.463057 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"7989362525e36cbe9b02a774bf2b229240e409651e928227805c9d15dd255ff9"}
Dec 08 17:40:44 crc kubenswrapper[5112]: I1208 17:40:44.463124 5112 scope.go:117] "RemoveContainer" containerID="464a4e10d8ff56b45cb38a25371b700b53ade63b40535c26b880f39ce81f1a0c"
Dec 08 17:40:44 crc kubenswrapper[5112]: I1208 17:40:44.463274 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:44 crc kubenswrapper[5112]: I1208 17:40:44.464052 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:44 crc kubenswrapper[5112]: I1208 17:40:44.464188 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:44 crc kubenswrapper[5112]: I1208 17:40:44.464219 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:44 crc kubenswrapper[5112]: E1208 17:40:44.464940 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:40:44 crc kubenswrapper[5112]: I1208 17:40:44.465593 5112 scope.go:117] "RemoveContainer" containerID="7989362525e36cbe9b02a774bf2b229240e409651e928227805c9d15dd255ff9"
Dec 08 17:40:44 crc kubenswrapper[5112]: E1208 17:40:44.466041 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 08 17:40:44 crc kubenswrapper[5112]: E1208 17:40:44.474050 5112 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e46b05ab277 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:44.465934967 +0000 UTC m=+21.475483708,LastTimestamp:2025-12-08 17:40:44.465934967 +0000 UTC m=+21.475483708,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:40:44 crc kubenswrapper[5112]: E1208 17:40:44.524586 5112 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 08 17:40:44 crc kubenswrapper[5112]: I1208 17:40:44.707495 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:44 crc kubenswrapper[5112]: I1208 17:40:44.708568 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:44 crc kubenswrapper[5112]: I1208 17:40:44.708633 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:44 crc kubenswrapper[5112]: I1208 17:40:44.708674 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:44 crc kubenswrapper[5112]: I1208 17:40:44.708710 5112 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 08 17:40:44 crc kubenswrapper[5112]: E1208 17:40:44.718976 5112 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 08 17:40:45 crc kubenswrapper[5112]: I1208 17:40:45.253471 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 17:40:45 crc kubenswrapper[5112]: E1208 17:40:45.328632 5112 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 08 17:40:45 crc kubenswrapper[5112]: I1208 17:40:45.473927 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Dec 08 17:40:45 crc kubenswrapper[5112]: I1208 17:40:45.476858 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:45 crc kubenswrapper[5112]: I1208 17:40:45.477637 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:45 crc kubenswrapper[5112]: I1208 17:40:45.477698 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:45 crc kubenswrapper[5112]: I1208 17:40:45.477719 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:45 crc kubenswrapper[5112]: E1208 17:40:45.478381 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:40:45 crc kubenswrapper[5112]: I1208 17:40:45.478872 5112 scope.go:117] "RemoveContainer" containerID="7989362525e36cbe9b02a774bf2b229240e409651e928227805c9d15dd255ff9"
Dec 08 17:40:45 crc kubenswrapper[5112]: E1208 17:40:45.479233 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 08 17:40:45 crc kubenswrapper[5112]: E1208 17:40:45.488714 5112 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e46b05ab277\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e46b05ab277 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:44.465934967 +0000 UTC m=+21.475483708,LastTimestamp:2025-12-08 17:40:45.479177821 +0000 UTC m=+22.488726562,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:40:45 crc kubenswrapper[5112]: E1208 17:40:45.886391 5112 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 08 17:40:46 crc kubenswrapper[5112]: I1208 17:40:46.251574 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 17:40:47 crc kubenswrapper[5112]: I1208 17:40:47.254401 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 17:40:47 crc kubenswrapper[5112]: E1208 17:40:47.485612 5112 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 08 17:40:48 crc kubenswrapper[5112]: I1208 17:40:48.235786 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:40:48 crc kubenswrapper[5112]: I1208 17:40:48.236110 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:48 crc kubenswrapper[5112]: I1208 17:40:48.237229 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:48 crc kubenswrapper[5112]: I1208 17:40:48.237394 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:48 crc kubenswrapper[5112]: I1208 17:40:48.237513 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:48 crc kubenswrapper[5112]: E1208 17:40:48.238124 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:40:48 crc kubenswrapper[5112]: I1208 17:40:48.243755 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:40:48 crc kubenswrapper[5112]: I1208 17:40:48.254230 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 17:40:48 crc kubenswrapper[5112]: I1208 17:40:48.484429 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:48 crc kubenswrapper[5112]: I1208 17:40:48.485113 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:48 crc kubenswrapper[5112]: I1208 17:40:48.485199 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:48 crc kubenswrapper[5112]: I1208 17:40:48.485226 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:48 crc kubenswrapper[5112]: E1208 17:40:48.485989 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:40:49 crc kubenswrapper[5112]: I1208 17:40:49.254357 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 17:40:49 crc kubenswrapper[5112]: E1208 17:40:49.926126 5112 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 08 17:40:50 crc kubenswrapper[5112]: I1208 17:40:50.253923 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 17:40:51 crc kubenswrapper[5112]: I1208 17:40:51.120014 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:51 crc kubenswrapper[5112]: I1208 17:40:51.121069 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:51 crc kubenswrapper[5112]: I1208 17:40:51.121192 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:51 crc kubenswrapper[5112]: I1208 17:40:51.121219 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:51 crc kubenswrapper[5112]: I1208 17:40:51.121266 5112 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 08 17:40:51 crc kubenswrapper[5112]: E1208 17:40:51.132625 5112 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 08 17:40:51 crc kubenswrapper[5112]: I1208 17:40:51.252683 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 17:40:51 crc kubenswrapper[5112]: I1208 17:40:51.762337 5112 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:40:51 crc kubenswrapper[5112]: I1208 17:40:51.762611 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:51 crc kubenswrapper[5112]: I1208 17:40:51.763517 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:51 crc kubenswrapper[5112]: I1208 17:40:51.763568 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:51 crc kubenswrapper[5112]: I1208 17:40:51.763588 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:51 crc kubenswrapper[5112]: E1208 17:40:51.764426 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:40:51 crc kubenswrapper[5112]: I1208 17:40:51.764857 5112 scope.go:117] "RemoveContainer" containerID="7989362525e36cbe9b02a774bf2b229240e409651e928227805c9d15dd255ff9"
Dec 08 17:40:51 crc kubenswrapper[5112]: E1208 17:40:51.765240 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc"
podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 17:40:51 crc kubenswrapper[5112]: E1208 17:40:51.770755 5112 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e46b05ab277\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e46b05ab277 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:44.465934967 +0000 UTC m=+21.475483708,LastTimestamp:2025-12-08 17:40:51.765181683 +0000 UTC m=+28.774730414,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:52 crc kubenswrapper[5112]: I1208 17:40:52.256352 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:40:52 crc kubenswrapper[5112]: E1208 17:40:52.556560 5112 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 08 17:40:52 crc kubenswrapper[5112]: E1208 17:40:52.895744 5112 
controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 17:40:53 crc kubenswrapper[5112]: I1208 17:40:53.248929 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:40:53 crc kubenswrapper[5112]: E1208 17:40:53.378848 5112 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 17:40:53 crc kubenswrapper[5112]: I1208 17:40:53.457576 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:40:53 crc kubenswrapper[5112]: I1208 17:40:53.457915 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:53 crc kubenswrapper[5112]: I1208 17:40:53.459201 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:53 crc kubenswrapper[5112]: I1208 17:40:53.459270 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:53 crc kubenswrapper[5112]: I1208 17:40:53.459291 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:53 crc kubenswrapper[5112]: E1208 17:40:53.459896 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:40:53 crc kubenswrapper[5112]: I1208 17:40:53.460295 5112 scope.go:117] "RemoveContainer" 
containerID="7989362525e36cbe9b02a774bf2b229240e409651e928227805c9d15dd255ff9" Dec 08 17:40:53 crc kubenswrapper[5112]: E1208 17:40:53.460642 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 17:40:53 crc kubenswrapper[5112]: E1208 17:40:53.465593 5112 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e46b05ab277\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e46b05ab277 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:44.465934967 +0000 UTC m=+21.475483708,LastTimestamp:2025-12-08 17:40:53.460573352 +0000 UTC m=+30.470122093,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:54 crc kubenswrapper[5112]: I1208 17:40:54.255045 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API 
group "storage.k8s.io" at the cluster scope Dec 08 17:40:54 crc kubenswrapper[5112]: E1208 17:40:54.789187 5112 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 08 17:40:55 crc kubenswrapper[5112]: E1208 17:40:55.039532 5112 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 08 17:40:55 crc kubenswrapper[5112]: I1208 17:40:55.255458 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:40:56 crc kubenswrapper[5112]: I1208 17:40:56.255778 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:40:57 crc kubenswrapper[5112]: I1208 17:40:57.253288 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:40:58 crc kubenswrapper[5112]: I1208 17:40:58.133059 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:58 crc kubenswrapper[5112]: I1208 17:40:58.134309 5112 kubelet_node_status.go:736] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:58 crc kubenswrapper[5112]: I1208 17:40:58.134354 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:58 crc kubenswrapper[5112]: I1208 17:40:58.134368 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:58 crc kubenswrapper[5112]: I1208 17:40:58.134397 5112 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 17:40:58 crc kubenswrapper[5112]: E1208 17:40:58.147623 5112 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 17:40:58 crc kubenswrapper[5112]: I1208 17:40:58.252472 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:40:59 crc kubenswrapper[5112]: I1208 17:40:59.253959 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:40:59 crc kubenswrapper[5112]: E1208 17:40:59.904717 5112 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 17:41:00 crc kubenswrapper[5112]: I1208 17:41:00.255556 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot 
get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:41:01 crc kubenswrapper[5112]: I1208 17:41:01.254177 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:41:02 crc kubenswrapper[5112]: I1208 17:41:02.254907 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:41:03 crc kubenswrapper[5112]: I1208 17:41:03.255700 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:41:03 crc kubenswrapper[5112]: E1208 17:41:03.379808 5112 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 17:41:04 crc kubenswrapper[5112]: I1208 17:41:04.255502 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:41:04 crc kubenswrapper[5112]: I1208 17:41:04.316558 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:41:04 crc kubenswrapper[5112]: I1208 17:41:04.317967 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:04 crc kubenswrapper[5112]: I1208 17:41:04.318069 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 
17:41:04 crc kubenswrapper[5112]: I1208 17:41:04.318144 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:04 crc kubenswrapper[5112]: E1208 17:41:04.318931 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:41:04 crc kubenswrapper[5112]: I1208 17:41:04.319433 5112 scope.go:117] "RemoveContainer" containerID="7989362525e36cbe9b02a774bf2b229240e409651e928227805c9d15dd255ff9" Dec 08 17:41:04 crc kubenswrapper[5112]: E1208 17:41:04.330839 5112 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e424ac88dc6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e424ac88dc6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:25.581989318 +0000 UTC m=+2.591538019,LastTimestamp:2025-12-08 17:41:04.321226465 +0000 UTC m=+41.330775196,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:41:04 crc kubenswrapper[5112]: E1208 17:41:04.556925 5112 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e42541eb0e4\" is forbidden: User \"system:anonymous\" cannot patch 
resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e42541eb0e4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:25.738629348 +0000 UTC m=+2.748178089,LastTimestamp:2025-12-08 17:41:04.549357398 +0000 UTC m=+41.558906099,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:41:04 crc kubenswrapper[5112]: E1208 17:41:04.566469 5112 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e4254c22ab2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e4254c22ab2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:25.749342898 +0000 UTC m=+2.758891609,LastTimestamp:2025-12-08 17:41:04.559504104 +0000 UTC m=+41.569052805,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:41:05 crc kubenswrapper[5112]: I1208 
17:41:05.148390 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:41:05 crc kubenswrapper[5112]: I1208 17:41:05.149846 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:05 crc kubenswrapper[5112]: I1208 17:41:05.150122 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:05 crc kubenswrapper[5112]: I1208 17:41:05.150292 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:05 crc kubenswrapper[5112]: I1208 17:41:05.150504 5112 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 17:41:05 crc kubenswrapper[5112]: E1208 17:41:05.165811 5112 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 17:41:05 crc kubenswrapper[5112]: I1208 17:41:05.252614 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:41:05 crc kubenswrapper[5112]: I1208 17:41:05.534484 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 08 17:41:05 crc kubenswrapper[5112]: I1208 17:41:05.538403 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"cc85bd1bdc8afabb9fe5081af316b9468abdb18d1961bf429e4d5e0a6d764e73"} Dec 08 17:41:05 crc kubenswrapper[5112]: I1208 
17:41:05.538921 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:41:05 crc kubenswrapper[5112]: I1208 17:41:05.539884 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:05 crc kubenswrapper[5112]: I1208 17:41:05.540010 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:05 crc kubenswrapper[5112]: I1208 17:41:05.540037 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:05 crc kubenswrapper[5112]: E1208 17:41:05.540679 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:41:06 crc kubenswrapper[5112]: I1208 17:41:06.254348 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:41:06 crc kubenswrapper[5112]: I1208 17:41:06.542181 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 08 17:41:06 crc kubenswrapper[5112]: I1208 17:41:06.543585 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 08 17:41:06 crc kubenswrapper[5112]: I1208 17:41:06.552191 5112 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="cc85bd1bdc8afabb9fe5081af316b9468abdb18d1961bf429e4d5e0a6d764e73" exitCode=255 Dec 08 17:41:06 crc kubenswrapper[5112]: I1208 17:41:06.552275 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"cc85bd1bdc8afabb9fe5081af316b9468abdb18d1961bf429e4d5e0a6d764e73"} Dec 08 17:41:06 crc kubenswrapper[5112]: I1208 17:41:06.552342 5112 scope.go:117] "RemoveContainer" containerID="7989362525e36cbe9b02a774bf2b229240e409651e928227805c9d15dd255ff9" Dec 08 17:41:06 crc kubenswrapper[5112]: I1208 17:41:06.552873 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:41:06 crc kubenswrapper[5112]: I1208 17:41:06.571133 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:06 crc kubenswrapper[5112]: I1208 17:41:06.571224 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:06 crc kubenswrapper[5112]: I1208 17:41:06.571240 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:06 crc kubenswrapper[5112]: E1208 17:41:06.571731 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:41:06 crc kubenswrapper[5112]: I1208 17:41:06.572213 5112 scope.go:117] "RemoveContainer" containerID="cc85bd1bdc8afabb9fe5081af316b9468abdb18d1961bf429e4d5e0a6d764e73" Dec 08 17:41:06 crc kubenswrapper[5112]: E1208 17:41:06.572475 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 17:41:06 crc kubenswrapper[5112]: E1208 17:41:06.580695 5112 event.go:359] "Server 
rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e46b05ab277\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e46b05ab277 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:44.465934967 +0000 UTC m=+21.475483708,LastTimestamp:2025-12-08 17:41:06.57244455 +0000 UTC m=+43.581993251,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:41:06 crc kubenswrapper[5112]: E1208 17:41:06.913281 5112 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 17:41:07 crc kubenswrapper[5112]: I1208 17:41:07.255602 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:41:07 crc kubenswrapper[5112]: I1208 17:41:07.558297 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 08 17:41:08 crc 
kubenswrapper[5112]: I1208 17:41:08.254580 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:41:09 crc kubenswrapper[5112]: I1208 17:41:09.254757 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:41:09 crc kubenswrapper[5112]: E1208 17:41:09.893036 5112 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 08 17:41:09 crc kubenswrapper[5112]: E1208 17:41:09.907402 5112 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 08 17:41:10 crc kubenswrapper[5112]: I1208 17:41:10.252547 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:41:10 crc kubenswrapper[5112]: E1208 17:41:10.701525 5112 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.RuntimeClass" Dec 08 17:41:11 crc kubenswrapper[5112]: E1208 17:41:11.140377 5112 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 08 17:41:11 crc kubenswrapper[5112]: I1208 17:41:11.256862 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:41:11 crc kubenswrapper[5112]: I1208 17:41:11.763180 5112 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:41:11 crc kubenswrapper[5112]: I1208 17:41:11.763548 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:41:11 crc kubenswrapper[5112]: I1208 17:41:11.764871 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:11 crc kubenswrapper[5112]: I1208 17:41:11.764951 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:11 crc kubenswrapper[5112]: I1208 17:41:11.764971 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:11 crc kubenswrapper[5112]: E1208 17:41:11.765645 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:41:11 crc kubenswrapper[5112]: I1208 17:41:11.766078 5112 scope.go:117] "RemoveContainer" containerID="cc85bd1bdc8afabb9fe5081af316b9468abdb18d1961bf429e4d5e0a6d764e73" Dec 08 17:41:11 crc 
kubenswrapper[5112]: E1208 17:41:11.766504 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 17:41:11 crc kubenswrapper[5112]: E1208 17:41:11.774504 5112 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e46b05ab277\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e46b05ab277 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:44.465934967 +0000 UTC m=+21.475483708,LastTimestamp:2025-12-08 17:41:11.766440707 +0000 UTC m=+48.775989438,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:41:12 crc kubenswrapper[5112]: I1208 17:41:12.166836 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:41:12 crc kubenswrapper[5112]: I1208 17:41:12.168014 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:12 crc 
kubenswrapper[5112]: I1208 17:41:12.168109 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:12 crc kubenswrapper[5112]: I1208 17:41:12.168129 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:12 crc kubenswrapper[5112]: I1208 17:41:12.168161 5112 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 17:41:12 crc kubenswrapper[5112]: E1208 17:41:12.182741 5112 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 17:41:12 crc kubenswrapper[5112]: I1208 17:41:12.255459 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:41:13 crc kubenswrapper[5112]: I1208 17:41:13.256333 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:41:13 crc kubenswrapper[5112]: E1208 17:41:13.381175 5112 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 17:41:13 crc kubenswrapper[5112]: E1208 17:41:13.916344 5112 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 17:41:14 crc kubenswrapper[5112]: I1208 17:41:14.257900 5112 csi_plugin.go:988] Failed to 
contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:41:15 crc kubenswrapper[5112]: I1208 17:41:15.255887 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:41:15 crc kubenswrapper[5112]: I1208 17:41:15.540060 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:41:15 crc kubenswrapper[5112]: I1208 17:41:15.540312 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:41:15 crc kubenswrapper[5112]: I1208 17:41:15.541345 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:15 crc kubenswrapper[5112]: I1208 17:41:15.541408 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:15 crc kubenswrapper[5112]: I1208 17:41:15.541427 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:15 crc kubenswrapper[5112]: E1208 17:41:15.542147 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:41:15 crc kubenswrapper[5112]: I1208 17:41:15.542572 5112 scope.go:117] "RemoveContainer" containerID="cc85bd1bdc8afabb9fe5081af316b9468abdb18d1961bf429e4d5e0a6d764e73" Dec 08 17:41:15 crc kubenswrapper[5112]: E1208 17:41:15.542987 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s 
restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 17:41:15 crc kubenswrapper[5112]: E1208 17:41:15.553319 5112 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e46b05ab277\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e46b05ab277 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:44.465934967 +0000 UTC m=+21.475483708,LastTimestamp:2025-12-08 17:41:15.542931076 +0000 UTC m=+52.552479817,Count:7,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:41:16 crc kubenswrapper[5112]: I1208 17:41:16.255754 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:41:16 crc kubenswrapper[5112]: I1208 17:41:16.399326 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 17:41:16 crc kubenswrapper[5112]: I1208 17:41:16.399554 5112 
kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:41:16 crc kubenswrapper[5112]: I1208 17:41:16.401655 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:16 crc kubenswrapper[5112]: I1208 17:41:16.402636 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:16 crc kubenswrapper[5112]: I1208 17:41:16.403027 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:16 crc kubenswrapper[5112]: E1208 17:41:16.403938 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:41:17 crc kubenswrapper[5112]: I1208 17:41:17.254739 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:41:18 crc kubenswrapper[5112]: I1208 17:41:18.254455 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:41:19 crc kubenswrapper[5112]: I1208 17:41:19.183217 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:41:19 crc kubenswrapper[5112]: I1208 17:41:19.184819 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:19 crc kubenswrapper[5112]: I1208 17:41:19.184892 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:19 crc kubenswrapper[5112]: I1208 17:41:19.184938 
5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:19 crc kubenswrapper[5112]: I1208 17:41:19.184983 5112 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 17:41:19 crc kubenswrapper[5112]: E1208 17:41:19.202434 5112 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 17:41:19 crc kubenswrapper[5112]: I1208 17:41:19.255347 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:41:20 crc kubenswrapper[5112]: I1208 17:41:20.255282 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:41:20 crc kubenswrapper[5112]: E1208 17:41:20.923414 5112 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 17:41:21 crc kubenswrapper[5112]: I1208 17:41:21.255196 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:41:22 crc kubenswrapper[5112]: I1208 17:41:22.253666 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User 
"system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:41:23 crc kubenswrapper[5112]: I1208 17:41:23.254644 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:41:23 crc kubenswrapper[5112]: E1208 17:41:23.382433 5112 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 17:41:24 crc kubenswrapper[5112]: I1208 17:41:24.252212 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:41:25 crc kubenswrapper[5112]: I1208 17:41:25.252908 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:41:26 crc kubenswrapper[5112]: I1208 17:41:26.202876 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:41:26 crc kubenswrapper[5112]: I1208 17:41:26.204017 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:26 crc kubenswrapper[5112]: I1208 17:41:26.204189 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:26 crc kubenswrapper[5112]: I1208 17:41:26.204216 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:26 crc kubenswrapper[5112]: I1208 17:41:26.204258 5112 kubelet_node_status.go:78] "Attempting to 
register node" node="crc" Dec 08 17:41:26 crc kubenswrapper[5112]: E1208 17:41:26.226278 5112 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 17:41:26 crc kubenswrapper[5112]: I1208 17:41:26.255948 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:41:27 crc kubenswrapper[5112]: I1208 17:41:27.251747 5112 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:41:27 crc kubenswrapper[5112]: I1208 17:41:27.834888 5112 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-cwjd9" Dec 08 17:41:27 crc kubenswrapper[5112]: I1208 17:41:27.843519 5112 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-cwjd9" Dec 08 17:41:27 crc kubenswrapper[5112]: I1208 17:41:27.933721 5112 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 08 17:41:28 crc kubenswrapper[5112]: I1208 17:41:28.170161 5112 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 08 17:41:28 crc kubenswrapper[5112]: I1208 17:41:28.845735 5112 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-01-07 17:36:27 +0000 UTC" deadline="2025-12-30 08:08:57.685160733 +0000 UTC" Dec 08 17:41:28 crc kubenswrapper[5112]: I1208 
17:41:28.845789 5112 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="518h27m28.839376962s" Dec 08 17:41:30 crc kubenswrapper[5112]: I1208 17:41:30.316320 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:41:30 crc kubenswrapper[5112]: I1208 17:41:30.317514 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:30 crc kubenswrapper[5112]: I1208 17:41:30.317603 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:30 crc kubenswrapper[5112]: I1208 17:41:30.317618 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:30 crc kubenswrapper[5112]: E1208 17:41:30.318212 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:41:30 crc kubenswrapper[5112]: I1208 17:41:30.318518 5112 scope.go:117] "RemoveContainer" containerID="cc85bd1bdc8afabb9fe5081af316b9468abdb18d1961bf429e4d5e0a6d764e73" Dec 08 17:41:30 crc kubenswrapper[5112]: I1208 17:41:30.625025 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 08 17:41:30 crc kubenswrapper[5112]: I1208 17:41:30.627825 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"7a8bdf5c7af29378b589fe72e7884a1b14b3ee02680ca8842031e246f786fa59"} Dec 08 17:41:30 crc kubenswrapper[5112]: I1208 17:41:30.628179 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:41:30 crc kubenswrapper[5112]: 
I1208 17:41:30.629072 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:30 crc kubenswrapper[5112]: I1208 17:41:30.629132 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:30 crc kubenswrapper[5112]: I1208 17:41:30.629149 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:30 crc kubenswrapper[5112]: E1208 17:41:30.629708 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:41:32 crc kubenswrapper[5112]: I1208 17:41:32.633981 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 08 17:41:32 crc kubenswrapper[5112]: I1208 17:41:32.634419 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 08 17:41:32 crc kubenswrapper[5112]: I1208 17:41:32.635718 5112 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="7a8bdf5c7af29378b589fe72e7884a1b14b3ee02680ca8842031e246f786fa59" exitCode=255 Dec 08 17:41:32 crc kubenswrapper[5112]: I1208 17:41:32.635781 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"7a8bdf5c7af29378b589fe72e7884a1b14b3ee02680ca8842031e246f786fa59"} Dec 08 17:41:32 crc kubenswrapper[5112]: I1208 17:41:32.635830 5112 scope.go:117] "RemoveContainer" containerID="cc85bd1bdc8afabb9fe5081af316b9468abdb18d1961bf429e4d5e0a6d764e73" Dec 08 17:41:32 crc kubenswrapper[5112]: I1208 17:41:32.636100 5112 
kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:41:32 crc kubenswrapper[5112]: I1208 17:41:32.636964 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:32 crc kubenswrapper[5112]: I1208 17:41:32.637025 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:32 crc kubenswrapper[5112]: I1208 17:41:32.637039 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:32 crc kubenswrapper[5112]: E1208 17:41:32.637567 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:41:32 crc kubenswrapper[5112]: I1208 17:41:32.637883 5112 scope.go:117] "RemoveContainer" containerID="7a8bdf5c7af29378b589fe72e7884a1b14b3ee02680ca8842031e246f786fa59" Dec 08 17:41:32 crc kubenswrapper[5112]: E1208 17:41:32.638152 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 17:41:33 crc kubenswrapper[5112]: I1208 17:41:33.227304 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:41:33 crc kubenswrapper[5112]: I1208 17:41:33.228740 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:33 crc kubenswrapper[5112]: I1208 17:41:33.228806 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:33 crc 
kubenswrapper[5112]: I1208 17:41:33.228823 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:33 crc kubenswrapper[5112]: I1208 17:41:33.228972 5112 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 17:41:33 crc kubenswrapper[5112]: I1208 17:41:33.238600 5112 kubelet_node_status.go:127] "Node was previously registered" node="crc" Dec 08 17:41:33 crc kubenswrapper[5112]: I1208 17:41:33.238930 5112 kubelet_node_status.go:81] "Successfully registered node" node="crc" Dec 08 17:41:33 crc kubenswrapper[5112]: E1208 17:41:33.238955 5112 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Dec 08 17:41:33 crc kubenswrapper[5112]: I1208 17:41:33.241837 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:33 crc kubenswrapper[5112]: I1208 17:41:33.241879 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:33 crc kubenswrapper[5112]: I1208 17:41:33.241891 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:33 crc kubenswrapper[5112]: I1208 17:41:33.241908 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:33 crc kubenswrapper[5112]: I1208 17:41:33.241921 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:33Z","lastTransitionTime":"2025-12-08T17:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:33 crc kubenswrapper[5112]: E1208 17:41:33.259036 5112 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bfc9941-22f6-447c-a313-68da2bceb39a\\\",\\\"systemUUID\\\":\\\"b5fe6617-167d-4502-9bb8-e694c6fec87c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:33 crc kubenswrapper[5112]: I1208 17:41:33.272027 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:33 crc kubenswrapper[5112]: I1208 17:41:33.272069 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:33 crc kubenswrapper[5112]: I1208 17:41:33.272099 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:33 crc kubenswrapper[5112]: I1208 17:41:33.272114 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:33 crc kubenswrapper[5112]: I1208 17:41:33.272124 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:33Z","lastTransitionTime":"2025-12-08T17:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:33 crc kubenswrapper[5112]: E1208 17:41:33.284688 5112 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bfc9941-22f6-447c-a313-68da2bceb39a\\\",\\\"systemUUID\\\":\\\"b5fe6617-167d-4502-9bb8-e694c6fec87c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:33 crc kubenswrapper[5112]: I1208 17:41:33.294449 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:33 crc kubenswrapper[5112]: I1208 17:41:33.294495 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:33 crc kubenswrapper[5112]: I1208 17:41:33.294504 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:33 crc kubenswrapper[5112]: I1208 17:41:33.294520 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:33 crc kubenswrapper[5112]: I1208 17:41:33.294529 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:33Z","lastTransitionTime":"2025-12-08T17:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:33 crc kubenswrapper[5112]: E1208 17:41:33.304943 5112 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bfc9941-22f6-447c-a313-68da2bceb39a\\\",\\\"systemUUID\\\":\\\"b5fe6617-167d-4502-9bb8-e694c6fec87c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:33 crc kubenswrapper[5112]: I1208 17:41:33.312672 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:33 crc kubenswrapper[5112]: I1208 17:41:33.315402 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:33 crc kubenswrapper[5112]: I1208 17:41:33.315416 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:33 crc kubenswrapper[5112]: I1208 17:41:33.315430 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:33 crc kubenswrapper[5112]: I1208 17:41:33.315440 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:33Z","lastTransitionTime":"2025-12-08T17:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:33 crc kubenswrapper[5112]: I1208 17:41:33.315689 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:41:33 crc kubenswrapper[5112]: I1208 17:41:33.316672 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:33 crc kubenswrapper[5112]: I1208 17:41:33.316807 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:33 crc kubenswrapper[5112]: I1208 17:41:33.316840 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:33 crc kubenswrapper[5112]: E1208 17:41:33.317603 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:41:33 crc kubenswrapper[5112]: E1208 17:41:33.324892 5112 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:33Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1
919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c486
7005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\
\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb
3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bfc9941-22f6-447c-a313-68da2bceb39a\\\",\\\"systemUUID\\\":\\\"b5fe6617-167d-4502-9bb8-e694c6fec87c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:33 crc kubenswrapper[5112]: E1208 17:41:33.325040 5112 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 08 17:41:33 crc kubenswrapper[5112]: E1208 17:41:33.325068 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:33 crc kubenswrapper[5112]: E1208 17:41:33.383277 5112 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 17:41:33 crc kubenswrapper[5112]: E1208 17:41:33.425644 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:33 crc kubenswrapper[5112]: E1208 17:41:33.526481 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:33 crc kubenswrapper[5112]: E1208 17:41:33.627158 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" 
Dec 08 17:41:33 crc kubenswrapper[5112]: I1208 17:41:33.639482 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 08 17:41:33 crc kubenswrapper[5112]: E1208 17:41:33.727860 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:33 crc kubenswrapper[5112]: E1208 17:41:33.828452 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:33 crc kubenswrapper[5112]: E1208 17:41:33.929008 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:34 crc kubenswrapper[5112]: E1208 17:41:34.029619 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:34 crc kubenswrapper[5112]: E1208 17:41:34.130529 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:34 crc kubenswrapper[5112]: E1208 17:41:34.231007 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:34 crc kubenswrapper[5112]: E1208 17:41:34.331385 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:34 crc kubenswrapper[5112]: E1208 17:41:34.432536 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:34 crc kubenswrapper[5112]: E1208 17:41:34.533563 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:34 crc kubenswrapper[5112]: E1208 17:41:34.633989 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:34 crc kubenswrapper[5112]: E1208 
17:41:34.735276 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:34 crc kubenswrapper[5112]: E1208 17:41:34.835868 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:34 crc kubenswrapper[5112]: E1208 17:41:34.936010 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:35 crc kubenswrapper[5112]: E1208 17:41:35.037139 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:35 crc kubenswrapper[5112]: E1208 17:41:35.138324 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:35 crc kubenswrapper[5112]: E1208 17:41:35.238963 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:35 crc kubenswrapper[5112]: E1208 17:41:35.339453 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:35 crc kubenswrapper[5112]: E1208 17:41:35.440587 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:35 crc kubenswrapper[5112]: E1208 17:41:35.541251 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:35 crc kubenswrapper[5112]: E1208 17:41:35.642235 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:35 crc kubenswrapper[5112]: E1208 17:41:35.742785 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:35 crc kubenswrapper[5112]: E1208 17:41:35.843790 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 
17:41:35 crc kubenswrapper[5112]: E1208 17:41:35.944782 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:36 crc kubenswrapper[5112]: E1208 17:41:36.045166 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:36 crc kubenswrapper[5112]: E1208 17:41:36.145400 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:36 crc kubenswrapper[5112]: E1208 17:41:36.246462 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:36 crc kubenswrapper[5112]: E1208 17:41:36.347883 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:36 crc kubenswrapper[5112]: E1208 17:41:36.448442 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:36 crc kubenswrapper[5112]: E1208 17:41:36.549344 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:36 crc kubenswrapper[5112]: E1208 17:41:36.649514 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:36 crc kubenswrapper[5112]: E1208 17:41:36.749613 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:36 crc kubenswrapper[5112]: E1208 17:41:36.850470 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:36 crc kubenswrapper[5112]: E1208 17:41:36.950602 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:37 crc kubenswrapper[5112]: E1208 17:41:37.051647 5112 kubelet_node_status.go:515] "Error getting the current node from 
lister" err="node \"crc\" not found" Dec 08 17:41:37 crc kubenswrapper[5112]: E1208 17:41:37.151793 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:37 crc kubenswrapper[5112]: E1208 17:41:37.252826 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:37 crc kubenswrapper[5112]: E1208 17:41:37.353318 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:37 crc kubenswrapper[5112]: E1208 17:41:37.453509 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:37 crc kubenswrapper[5112]: E1208 17:41:37.553973 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:37 crc kubenswrapper[5112]: E1208 17:41:37.654587 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:37 crc kubenswrapper[5112]: E1208 17:41:37.755281 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:37 crc kubenswrapper[5112]: E1208 17:41:37.855502 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:37 crc kubenswrapper[5112]: E1208 17:41:37.956014 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:38 crc kubenswrapper[5112]: E1208 17:41:38.057186 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:38 crc kubenswrapper[5112]: E1208 17:41:38.157410 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:38 crc kubenswrapper[5112]: E1208 17:41:38.258483 5112 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:38 crc kubenswrapper[5112]: E1208 17:41:38.359201 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:38 crc kubenswrapper[5112]: E1208 17:41:38.460305 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:38 crc kubenswrapper[5112]: E1208 17:41:38.561576 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:38 crc kubenswrapper[5112]: E1208 17:41:38.662384 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:38 crc kubenswrapper[5112]: E1208 17:41:38.763648 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:38 crc kubenswrapper[5112]: E1208 17:41:38.864275 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:38 crc kubenswrapper[5112]: E1208 17:41:38.965299 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:39 crc kubenswrapper[5112]: E1208 17:41:39.065860 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:39 crc kubenswrapper[5112]: E1208 17:41:39.166587 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:39 crc kubenswrapper[5112]: E1208 17:41:39.267379 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:39 crc kubenswrapper[5112]: E1208 17:41:39.368234 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:39 crc 
kubenswrapper[5112]: I1208 17:41:39.369965 5112 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 17:41:39 crc kubenswrapper[5112]: E1208 17:41:39.468654 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:39 crc kubenswrapper[5112]: E1208 17:41:39.568971 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:39 crc kubenswrapper[5112]: E1208 17:41:39.669848 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:39 crc kubenswrapper[5112]: E1208 17:41:39.770993 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:39 crc kubenswrapper[5112]: E1208 17:41:39.871731 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:39 crc kubenswrapper[5112]: E1208 17:41:39.972428 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:40 crc kubenswrapper[5112]: E1208 17:41:40.072901 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:40 crc kubenswrapper[5112]: E1208 17:41:40.173116 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:40 crc kubenswrapper[5112]: E1208 17:41:40.274135 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:40 crc kubenswrapper[5112]: E1208 17:41:40.375104 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:40 crc kubenswrapper[5112]: E1208 17:41:40.476162 5112 kubelet_node_status.go:515] "Error getting the current node from 
lister" err="node \"crc\" not found" Dec 08 17:41:40 crc kubenswrapper[5112]: E1208 17:41:40.577157 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:40 crc kubenswrapper[5112]: I1208 17:41:40.629304 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:41:40 crc kubenswrapper[5112]: I1208 17:41:40.629575 5112 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:41:40 crc kubenswrapper[5112]: I1208 17:41:40.630863 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:40 crc kubenswrapper[5112]: I1208 17:41:40.631061 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:40 crc kubenswrapper[5112]: I1208 17:41:40.631293 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:40 crc kubenswrapper[5112]: E1208 17:41:40.632336 5112 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:41:40 crc kubenswrapper[5112]: I1208 17:41:40.632870 5112 scope.go:117] "RemoveContainer" containerID="7a8bdf5c7af29378b589fe72e7884a1b14b3ee02680ca8842031e246f786fa59" Dec 08 17:41:40 crc kubenswrapper[5112]: E1208 17:41:40.633340 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 17:41:40 crc kubenswrapper[5112]: E1208 17:41:40.678607 5112 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:40 crc kubenswrapper[5112]: E1208 17:41:40.779391 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:40 crc kubenswrapper[5112]: E1208 17:41:40.879901 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:40 crc kubenswrapper[5112]: E1208 17:41:40.980142 5112 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.057566 5112 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.061836 5112 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.075177 5112 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.082186 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.082232 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.082246 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.082264 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.082276 5112 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:41Z","lastTransitionTime":"2025-12-08T17:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.175915 5112 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.184772 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.184826 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.184841 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.184860 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.184873 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:41Z","lastTransitionTime":"2025-12-08T17:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.277400 5112 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.286694 5112 apiserver.go:52] "Watching apiserver" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.287732 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.287808 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.287827 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.287852 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.287871 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:41Z","lastTransitionTime":"2025-12-08T17:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.293449 5112 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.294072 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-node-identity/network-node-identity-dgvkt","openshift-network-operator/iptables-alerter-5jnd7","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-machine-config-operator/machine-config-daemon-s6wzf","openshift-multus/network-metrics-daemon-7jq8h","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf","openshift-dns/node-resolver-rsc28","openshift-image-registry/node-ca-4hrlr","openshift-multus/multus-additional-cni-plugins-9xjh5","openshift-multus/multus-kvv4v","openshift-network-diagnostics/network-check-target-fhkjl","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-ovn-kubernetes/ovnkube-node-ng27z","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"] Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.295740 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.297043 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.297213 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.299482 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.299524 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.299525 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.299732 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.299790 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.300032 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.302507 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.303877 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.304661 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.304773 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.305478 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.307461 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.308724 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.310073 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.311332 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.329105 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-rsc28" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.331552 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.332269 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.332662 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.334450 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.335923 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.336460 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.337028 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.337715 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.338175 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.338210 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.338192 5112 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.341465 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-4hrlr" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.342891 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.343321 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.343455 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.344177 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.345215 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-9xjh5" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.346875 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.348207 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.348559 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.348857 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.348928 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.349147 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.349248 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.350697 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.351197 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.352430 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jq8h" Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.352566 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7jq8h" podUID="3c4fb553-8514-4194-847c-96d40f8b41e3" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.352902 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.365643 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.374931 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-rsc28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a63a9eaf-972d-4a8f-a9e5-f0f397bf8e9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with 
unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pm48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rsc28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.376553 5112 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.386165 5112 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.389230 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.390317 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.390398 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.390410 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.390427 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.390438 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:41Z","lastTransitionTime":"2025-12-08T17:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.391156 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.391453 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.392372 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.392471 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.392524 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.398044 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.409501 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"472d4dbe-4674-43ba-98da-98502eccb960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv8p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv8p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-b7fmf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.421246 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.427400 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5c78b0cc-34ce-48fe-aeb3-f84b04fc6af6-serviceca\") pod \"node-ca-4hrlr\" (UID: \"5c78b0cc-34ce-48fe-aeb3-f84b04fc6af6\") " pod="openshift-image-registry/node-ca-4hrlr" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.427462 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.427489 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/288ee203-be3f-4176-90b2-7d95ee47aee8-host-run-k8s-cni-cncf-io\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.427576 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/288ee203-be3f-4176-90b2-7d95ee47aee8-host-var-lib-kubelet\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.427934 5112 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.428158 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:41:41.92807296 +0000 UTC m=+78.937621661 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.428155 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.428238 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/472d4dbe-4674-43ba-98da-98502eccb960-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-b7fmf\" (UID: \"472d4dbe-4674-43ba-98da-98502eccb960\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.428284 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/575dcc54-1cfa-45ab-8c22-087fcf27f142-system-cni-dir\") pod \"multus-additional-cni-plugins-9xjh5\" (UID: \"575dcc54-1cfa-45ab-8c22-087fcf27f142\") " pod="openshift-multus/multus-additional-cni-plugins-9xjh5" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.428319 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " 
pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.428359 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/288ee203-be3f-4176-90b2-7d95ee47aee8-cnibin\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.428384 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.428409 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/95e46da0-94bb-4d22-804b-b3018984cdac-proxy-tls\") pod \"machine-config-daemon-s6wzf\" (UID: \"95e46da0-94bb-4d22-804b-b3018984cdac\") " pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.428445 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a63a9eaf-972d-4a8f-a9e5-f0f397bf8e9b-tmp-dir\") pod \"node-resolver-rsc28\" (UID: \"a63a9eaf-972d-4a8f-a9e5-f0f397bf8e9b\") " pod="openshift-dns/node-resolver-rsc28" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.428475 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/288ee203-be3f-4176-90b2-7d95ee47aee8-etc-kubernetes\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " 
pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.428514 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.428535 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/472d4dbe-4674-43ba-98da-98502eccb960-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-b7fmf\" (UID: \"472d4dbe-4674-43ba-98da-98502eccb960\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.428554 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/575dcc54-1cfa-45ab-8c22-087fcf27f142-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9xjh5\" (UID: \"575dcc54-1cfa-45ab-8c22-087fcf27f142\") " pod="openshift-multus/multus-additional-cni-plugins-9xjh5" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.428573 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/575dcc54-1cfa-45ab-8c22-087fcf27f142-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-9xjh5\" (UID: \"575dcc54-1cfa-45ab-8c22-087fcf27f142\") " pod="openshift-multus/multus-additional-cni-plugins-9xjh5" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.428639 5112 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c98z\" (UniqueName: \"kubernetes.io/projected/575dcc54-1cfa-45ab-8c22-087fcf27f142-kube-api-access-5c98z\") pod \"multus-additional-cni-plugins-9xjh5\" (UID: \"575dcc54-1cfa-45ab-8c22-087fcf27f142\") " pod="openshift-multus/multus-additional-cni-plugins-9xjh5" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.428696 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.428722 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/288ee203-be3f-4176-90b2-7d95ee47aee8-os-release\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.428771 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/288ee203-be3f-4176-90b2-7d95ee47aee8-multus-daemon-config\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.428797 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mv6w\" (UniqueName: \"kubernetes.io/projected/3c4fb553-8514-4194-847c-96d40f8b41e3-kube-api-access-7mv6w\") pod \"network-metrics-daemon-7jq8h\" (UID: \"3c4fb553-8514-4194-847c-96d40f8b41e3\") " pod="openshift-multus/network-metrics-daemon-7jq8h" Dec 08 17:41:41 
crc kubenswrapper[5112]: E1208 17:41:41.429054 5112 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.429219 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:41:41.929206791 +0000 UTC m=+78.938755502 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.428816 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv8p6\" (UniqueName: \"kubernetes.io/projected/472d4dbe-4674-43ba-98da-98502eccb960-kube-api-access-sv8p6\") pod \"ovnkube-control-plane-57b78d8988-b7fmf\" (UID: \"472d4dbe-4674-43ba-98da-98502eccb960\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.429323 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.429421 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.429520 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/472d4dbe-4674-43ba-98da-98502eccb960-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-b7fmf\" (UID: \"472d4dbe-4674-43ba-98da-98502eccb960\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.429585 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/575dcc54-1cfa-45ab-8c22-087fcf27f142-cnibin\") pod \"multus-additional-cni-plugins-9xjh5\" (UID: \"575dcc54-1cfa-45ab-8c22-087fcf27f142\") " pod="openshift-multus/multus-additional-cni-plugins-9xjh5" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.429685 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.430121 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 17:41:41 crc 
kubenswrapper[5112]: I1208 17:41:41.430691 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88g7z\" (UniqueName: \"kubernetes.io/projected/5c78b0cc-34ce-48fe-aeb3-f84b04fc6af6-kube-api-access-88g7z\") pod \"node-ca-4hrlr\" (UID: \"5c78b0cc-34ce-48fe-aeb3-f84b04fc6af6\") " pod="openshift-image-registry/node-ca-4hrlr" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.430827 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/575dcc54-1cfa-45ab-8c22-087fcf27f142-os-release\") pod \"multus-additional-cni-plugins-9xjh5\" (UID: \"575dcc54-1cfa-45ab-8c22-087fcf27f142\") " pod="openshift-multus/multus-additional-cni-plugins-9xjh5" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.430870 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/288ee203-be3f-4176-90b2-7d95ee47aee8-host-run-netns\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.430909 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.430977 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5c78b0cc-34ce-48fe-aeb3-f84b04fc6af6-host\") pod \"node-ca-4hrlr\" (UID: \"5c78b0cc-34ce-48fe-aeb3-f84b04fc6af6\") " pod="openshift-image-registry/node-ca-4hrlr" Dec 08 17:41:41 crc 
kubenswrapper[5112]: I1208 17:41:41.431008 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3c4fb553-8514-4194-847c-96d40f8b41e3-metrics-certs\") pod \"network-metrics-daemon-7jq8h\" (UID: \"3c4fb553-8514-4194-847c-96d40f8b41e3\") " pod="openshift-multus/network-metrics-daemon-7jq8h" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.431039 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/95e46da0-94bb-4d22-804b-b3018984cdac-rootfs\") pod \"machine-config-daemon-s6wzf\" (UID: \"95e46da0-94bb-4d22-804b-b3018984cdac\") " pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.431063 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/575dcc54-1cfa-45ab-8c22-087fcf27f142-cni-binary-copy\") pod \"multus-additional-cni-plugins-9xjh5\" (UID: \"575dcc54-1cfa-45ab-8c22-087fcf27f142\") " pod="openshift-multus/multus-additional-cni-plugins-9xjh5" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.431719 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.432042 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/288ee203-be3f-4176-90b2-7d95ee47aee8-multus-socket-dir-parent\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.432092 
5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/288ee203-be3f-4176-90b2-7d95ee47aee8-host-run-multus-certs\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.432137 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbcf4\" (UniqueName: \"kubernetes.io/projected/288ee203-be3f-4176-90b2-7d95ee47aee8-kube-api-access-gbcf4\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.432164 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/288ee203-be3f-4176-90b2-7d95ee47aee8-system-cni-dir\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.432193 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/288ee203-be3f-4176-90b2-7d95ee47aee8-multus-cni-dir\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.432219 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/288ee203-be3f-4176-90b2-7d95ee47aee8-cni-binary-copy\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.432808 5112 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.433248 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/288ee203-be3f-4176-90b2-7d95ee47aee8-host-var-lib-cni-bin\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.433303 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/95e46da0-94bb-4d22-804b-b3018984cdac-mcd-auth-proxy-config\") pod \"machine-config-daemon-s6wzf\" (UID: \"95e46da0-94bb-4d22-804b-b3018984cdac\") " pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.433337 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.433365 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/288ee203-be3f-4176-90b2-7d95ee47aee8-multus-conf-dir\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc 
kubenswrapper[5112]: I1208 17:41:41.433394 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.433426 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.433495 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56lk7\" (UniqueName: \"kubernetes.io/projected/95e46da0-94bb-4d22-804b-b3018984cdac-kube-api-access-56lk7\") pod \"machine-config-daemon-s6wzf\" (UID: \"95e46da0-94bb-4d22-804b-b3018984cdac\") " pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.433523 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pm48\" (UniqueName: \"kubernetes.io/projected/a63a9eaf-972d-4a8f-a9e5-f0f397bf8e9b-kube-api-access-4pm48\") pod \"node-resolver-rsc28\" (UID: \"a63a9eaf-972d-4a8f-a9e5-f0f397bf8e9b\") " pod="openshift-dns/node-resolver-rsc28" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.433549 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/288ee203-be3f-4176-90b2-7d95ee47aee8-host-var-lib-cni-multus\") pod \"multus-kvv4v\" 
(UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.433575 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a63a9eaf-972d-4a8f-a9e5-f0f397bf8e9b-hosts-file\") pod \"node-resolver-rsc28\" (UID: \"a63a9eaf-972d-4a8f-a9e5-f0f397bf8e9b\") " pod="openshift-dns/node-resolver-rsc28" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.433600 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/575dcc54-1cfa-45ab-8c22-087fcf27f142-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9xjh5\" (UID: \"575dcc54-1cfa-45ab-8c22-087fcf27f142\") " pod="openshift-multus/multus-additional-cni-plugins-9xjh5" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.433622 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/288ee203-be3f-4176-90b2-7d95ee47aee8-hostroot\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.433957 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.446150 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9xjh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"575dcc54-1cfa-45ab-8c22-087fcf27f142\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"
name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9xjh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.451464 5112 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.451516 5112 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.451537 5112 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.451656 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 17:41:41.951628264 +0000 UTC m=+78.961176965 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.452879 5112 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.452938 5112 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.452959 5112 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.453006 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 17:41:41.952994891 +0000 UTC m=+78.962543592 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.453966 5112 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.458127 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.461284 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.461347 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.461551 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.461992 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.462847 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.470867 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"472d4dbe-4674-43ba-98da-98502eccb960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv8p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv8p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-b7fmf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.478209 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4hrlr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c78b0cc-34ce-48fe-aeb3-f84b04fc6af6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4hrlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.486409 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.493431 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-rsc28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a63a9eaf-972d-4a8f-a9e5-f0f397bf8e9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pm48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rsc28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.493531 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.493569 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:41 crc 
kubenswrapper[5112]: I1208 17:41:41.493581 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.493596 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.493606 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:41Z","lastTransitionTime":"2025-12-08T17:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.501508 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.510785 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-kvv4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"288ee203-be3f-4176-90b2-7d95ee47aee8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbcf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kvv4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.519712 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jq8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c4fb553-8514-4194-847c-96d40f8b41e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mv6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mv6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jq8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.527801 5112 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-etcd/etcd-crc"] Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.527879 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.527967 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.527971 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95e46da0-94bb-4d22-804b-b3018984cdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56lk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56lk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s6wzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.529054 5112 scope.go:117] "RemoveContainer" containerID="7a8bdf5c7af29378b589fe72e7884a1b14b3ee02680ca8842031e246f786fa59" Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.529293 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.531117 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.531603 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.532283 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.532876 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.534308 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/3c4fb553-8514-4194-847c-96d40f8b41e3-metrics-certs\") pod \"network-metrics-daemon-7jq8h\" (UID: \"3c4fb553-8514-4194-847c-96d40f8b41e3\") " pod="openshift-multus/network-metrics-daemon-7jq8h" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.534366 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/95e46da0-94bb-4d22-804b-b3018984cdac-rootfs\") pod \"machine-config-daemon-s6wzf\" (UID: \"95e46da0-94bb-4d22-804b-b3018984cdac\") " pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.534390 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/575dcc54-1cfa-45ab-8c22-087fcf27f142-cni-binary-copy\") pod \"multus-additional-cni-plugins-9xjh5\" (UID: \"575dcc54-1cfa-45ab-8c22-087fcf27f142\") " pod="openshift-multus/multus-additional-cni-plugins-9xjh5" Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.534566 5112 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.534657 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c4fb553-8514-4194-847c-96d40f8b41e3-metrics-certs podName:3c4fb553-8514-4194-847c-96d40f8b41e3 nodeName:}" failed. No retries permitted until 2025-12-08 17:41:42.034633778 +0000 UTC m=+79.044182479 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3c4fb553-8514-4194-847c-96d40f8b41e3-metrics-certs") pod "network-metrics-daemon-7jq8h" (UID: "3c4fb553-8514-4194-847c-96d40f8b41e3") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.534929 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-host-slash\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.534975 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/288ee203-be3f-4176-90b2-7d95ee47aee8-multus-socket-dir-parent\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.534999 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/288ee203-be3f-4176-90b2-7d95ee47aee8-host-run-multus-certs\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.535018 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gbcf4\" (UniqueName: \"kubernetes.io/projected/288ee203-be3f-4176-90b2-7d95ee47aee8-kube-api-access-gbcf4\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.535043 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-run-ovn\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.535064 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/288ee203-be3f-4176-90b2-7d95ee47aee8-system-cni-dir\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.535088 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/288ee203-be3f-4176-90b2-7d95ee47aee8-multus-cni-dir\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.535124 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/288ee203-be3f-4176-90b2-7d95ee47aee8-cni-binary-copy\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.535144 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-systemd-units\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.535480 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-host-cni-bin\") pod 
\"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.535501 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.535519 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0510de3f-316a-4902-a746-a746c3ce594c-ovnkube-config\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.535540 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/288ee203-be3f-4176-90b2-7d95ee47aee8-host-var-lib-cni-bin\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.535561 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/95e46da0-94bb-4d22-804b-b3018984cdac-mcd-auth-proxy-config\") pod \"machine-config-daemon-s6wzf\" (UID: \"95e46da0-94bb-4d22-804b-b3018984cdac\") " pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.535584 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-host-run-netns\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.535611 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.535631 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/288ee203-be3f-4176-90b2-7d95ee47aee8-multus-conf-dir\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.535662 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-host-cni-netd\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.535691 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-56lk7\" (UniqueName: \"kubernetes.io/projected/95e46da0-94bb-4d22-804b-b3018984cdac-kube-api-access-56lk7\") pod \"machine-config-daemon-s6wzf\" (UID: \"95e46da0-94bb-4d22-804b-b3018984cdac\") " pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.535709 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4pm48\" (UniqueName: 
\"kubernetes.io/projected/a63a9eaf-972d-4a8f-a9e5-f0f397bf8e9b-kube-api-access-4pm48\") pod \"node-resolver-rsc28\" (UID: \"a63a9eaf-972d-4a8f-a9e5-f0f397bf8e9b\") " pod="openshift-dns/node-resolver-rsc28" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.535728 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-host-kubelet\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.535750 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-host-run-ovn-kubernetes\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.535772 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/288ee203-be3f-4176-90b2-7d95ee47aee8-host-var-lib-cni-multus\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.535822 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a63a9eaf-972d-4a8f-a9e5-f0f397bf8e9b-hosts-file\") pod \"node-resolver-rsc28\" (UID: \"a63a9eaf-972d-4a8f-a9e5-f0f397bf8e9b\") " pod="openshift-dns/node-resolver-rsc28" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.535844 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/575dcc54-1cfa-45ab-8c22-087fcf27f142-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9xjh5\" (UID: \"575dcc54-1cfa-45ab-8c22-087fcf27f142\") " pod="openshift-multus/multus-additional-cni-plugins-9xjh5" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.535863 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vcrm\" (UniqueName: \"kubernetes.io/projected/0510de3f-316a-4902-a746-a746c3ce594c-kube-api-access-7vcrm\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.535884 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/288ee203-be3f-4176-90b2-7d95ee47aee8-hostroot\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.535907 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-var-lib-openvswitch\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.535924 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0510de3f-316a-4902-a746-a746c3ce594c-env-overrides\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.535950 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/5c78b0cc-34ce-48fe-aeb3-f84b04fc6af6-serviceca\") pod \"node-ca-4hrlr\" (UID: \"5c78b0cc-34ce-48fe-aeb3-f84b04fc6af6\") " pod="openshift-image-registry/node-ca-4hrlr" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.536032 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-run-systemd\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.536113 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-node-log\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.536138 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/288ee203-be3f-4176-90b2-7d95ee47aee8-host-run-k8s-cni-cncf-io\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.536158 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/288ee203-be3f-4176-90b2-7d95ee47aee8-host-var-lib-kubelet\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.536191 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/472d4dbe-4674-43ba-98da-98502eccb960-ovnkube-config\") 
pod \"ovnkube-control-plane-57b78d8988-b7fmf\" (UID: \"472d4dbe-4674-43ba-98da-98502eccb960\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.536212 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/575dcc54-1cfa-45ab-8c22-087fcf27f142-system-cni-dir\") pod \"multus-additional-cni-plugins-9xjh5\" (UID: \"575dcc54-1cfa-45ab-8c22-087fcf27f142\") " pod="openshift-multus/multus-additional-cni-plugins-9xjh5" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.536239 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-log-socket\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.536274 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/288ee203-be3f-4176-90b2-7d95ee47aee8-cnibin\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.536312 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/95e46da0-94bb-4d22-804b-b3018984cdac-proxy-tls\") pod \"machine-config-daemon-s6wzf\" (UID: \"95e46da0-94bb-4d22-804b-b3018984cdac\") " pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.536336 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a63a9eaf-972d-4a8f-a9e5-f0f397bf8e9b-tmp-dir\") pod \"node-resolver-rsc28\" (UID: 
\"a63a9eaf-972d-4a8f-a9e5-f0f397bf8e9b\") " pod="openshift-dns/node-resolver-rsc28" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.536357 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/288ee203-be3f-4176-90b2-7d95ee47aee8-etc-kubernetes\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.536415 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/472d4dbe-4674-43ba-98da-98502eccb960-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-b7fmf\" (UID: \"472d4dbe-4674-43ba-98da-98502eccb960\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.536445 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/575dcc54-1cfa-45ab-8c22-087fcf27f142-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9xjh5\" (UID: \"575dcc54-1cfa-45ab-8c22-087fcf27f142\") " pod="openshift-multus/multus-additional-cni-plugins-9xjh5" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.536472 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/575dcc54-1cfa-45ab-8c22-087fcf27f142-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-9xjh5\" (UID: \"575dcc54-1cfa-45ab-8c22-087fcf27f142\") " pod="openshift-multus/multus-additional-cni-plugins-9xjh5" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.536493 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5c98z\" (UniqueName: 
\"kubernetes.io/projected/575dcc54-1cfa-45ab-8c22-087fcf27f142-kube-api-access-5c98z\") pod \"multus-additional-cni-plugins-9xjh5\" (UID: \"575dcc54-1cfa-45ab-8c22-087fcf27f142\") " pod="openshift-multus/multus-additional-cni-plugins-9xjh5" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.536549 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.536573 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/288ee203-be3f-4176-90b2-7d95ee47aee8-os-release\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.536617 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/288ee203-be3f-4176-90b2-7d95ee47aee8-multus-daemon-config\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.536634 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/288ee203-be3f-4176-90b2-7d95ee47aee8-system-cni-dir\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.536643 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7mv6w\" (UniqueName: 
\"kubernetes.io/projected/3c4fb553-8514-4194-847c-96d40f8b41e3-kube-api-access-7mv6w\") pod \"network-metrics-daemon-7jq8h\" (UID: \"3c4fb553-8514-4194-847c-96d40f8b41e3\") " pod="openshift-multus/network-metrics-daemon-7jq8h" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.536728 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sv8p6\" (UniqueName: \"kubernetes.io/projected/472d4dbe-4674-43ba-98da-98502eccb960-kube-api-access-sv8p6\") pod \"ovnkube-control-plane-57b78d8988-b7fmf\" (UID: \"472d4dbe-4674-43ba-98da-98502eccb960\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.536952 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/95e46da0-94bb-4d22-804b-b3018984cdac-mcd-auth-proxy-config\") pod \"machine-config-daemon-s6wzf\" (UID: \"95e46da0-94bb-4d22-804b-b3018984cdac\") " pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.536966 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.537023 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0510de3f-316a-4902-a746-a746c3ce594c-ovnkube-script-lib\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.537052 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/288ee203-be3f-4176-90b2-7d95ee47aee8-host-var-lib-cni-bin\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.537142 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/288ee203-be3f-4176-90b2-7d95ee47aee8-host-var-lib-cni-multus\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.537211 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a63a9eaf-972d-4a8f-a9e5-f0f397bf8e9b-hosts-file\") pod \"node-resolver-rsc28\" (UID: \"a63a9eaf-972d-4a8f-a9e5-f0f397bf8e9b\") " pod="openshift-dns/node-resolver-rsc28" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.537495 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/288ee203-be3f-4176-90b2-7d95ee47aee8-cni-binary-copy\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.537738 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/288ee203-be3f-4176-90b2-7d95ee47aee8-multus-socket-dir-parent\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.537788 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/288ee203-be3f-4176-90b2-7d95ee47aee8-host-run-multus-certs\") pod \"multus-kvv4v\" (UID: 
\"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.537800 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/472d4dbe-4674-43ba-98da-98502eccb960-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-b7fmf\" (UID: \"472d4dbe-4674-43ba-98da-98502eccb960\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.537812 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.537943 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/288ee203-be3f-4176-90b2-7d95ee47aee8-hostroot\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.537945 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/575dcc54-1cfa-45ab-8c22-087fcf27f142-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9xjh5\" (UID: \"575dcc54-1cfa-45ab-8c22-087fcf27f142\") " pod="openshift-multus/multus-additional-cni-plugins-9xjh5" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.536558 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/288ee203-be3f-4176-90b2-7d95ee47aee8-multus-cni-dir\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.538066 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/288ee203-be3f-4176-90b2-7d95ee47aee8-os-release\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.538081 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/288ee203-be3f-4176-90b2-7d95ee47aee8-etc-kubernetes\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.538140 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/95e46da0-94bb-4d22-804b-b3018984cdac-rootfs\") pod \"machine-config-daemon-s6wzf\" (UID: \"95e46da0-94bb-4d22-804b-b3018984cdac\") " pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.538150 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/288ee203-be3f-4176-90b2-7d95ee47aee8-cnibin\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.538178 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/575dcc54-1cfa-45ab-8c22-087fcf27f142-system-cni-dir\") pod \"multus-additional-cni-plugins-9xjh5\" (UID: \"575dcc54-1cfa-45ab-8c22-087fcf27f142\") " 
pod="openshift-multus/multus-additional-cni-plugins-9xjh5" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.538211 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/288ee203-be3f-4176-90b2-7d95ee47aee8-host-var-lib-kubelet\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.538211 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/575dcc54-1cfa-45ab-8c22-087fcf27f142-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9xjh5\" (UID: \"575dcc54-1cfa-45ab-8c22-087fcf27f142\") " pod="openshift-multus/multus-additional-cni-plugins-9xjh5" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.538258 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/288ee203-be3f-4176-90b2-7d95ee47aee8-host-run-k8s-cni-cncf-io\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.538268 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/288ee203-be3f-4176-90b2-7d95ee47aee8-multus-conf-dir\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.538319 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/472d4dbe-4674-43ba-98da-98502eccb960-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-b7fmf\" (UID: \"472d4dbe-4674-43ba-98da-98502eccb960\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" Dec 08 
17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.538358 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/575dcc54-1cfa-45ab-8c22-087fcf27f142-cnibin\") pod \"multus-additional-cni-plugins-9xjh5\" (UID: \"575dcc54-1cfa-45ab-8c22-087fcf27f142\") " pod="openshift-multus/multus-additional-cni-plugins-9xjh5" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.538381 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-etc-openvswitch\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.538404 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-88g7z\" (UniqueName: \"kubernetes.io/projected/5c78b0cc-34ce-48fe-aeb3-f84b04fc6af6-kube-api-access-88g7z\") pod \"node-ca-4hrlr\" (UID: \"5c78b0cc-34ce-48fe-aeb3-f84b04fc6af6\") " pod="openshift-image-registry/node-ca-4hrlr" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.538425 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/575dcc54-1cfa-45ab-8c22-087fcf27f142-os-release\") pod \"multus-additional-cni-plugins-9xjh5\" (UID: \"575dcc54-1cfa-45ab-8c22-087fcf27f142\") " pod="openshift-multus/multus-additional-cni-plugins-9xjh5" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.538445 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-run-openvswitch\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 
17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.538466 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0510de3f-316a-4902-a746-a746c3ce594c-ovn-node-metrics-cert\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.538489 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/575dcc54-1cfa-45ab-8c22-087fcf27f142-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-9xjh5\" (UID: \"575dcc54-1cfa-45ab-8c22-087fcf27f142\") " pod="openshift-multus/multus-additional-cni-plugins-9xjh5" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.538542 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.538601 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a63a9eaf-972d-4a8f-a9e5-f0f397bf8e9b-tmp-dir\") pod \"node-resolver-rsc28\" (UID: \"a63a9eaf-972d-4a8f-a9e5-f0f397bf8e9b\") " pod="openshift-dns/node-resolver-rsc28" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.538957 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/575dcc54-1cfa-45ab-8c22-087fcf27f142-cnibin\") pod \"multus-additional-cni-plugins-9xjh5\" (UID: \"575dcc54-1cfa-45ab-8c22-087fcf27f142\") " pod="openshift-multus/multus-additional-cni-plugins-9xjh5" Dec 08 
17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.539193 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5c78b0cc-34ce-48fe-aeb3-f84b04fc6af6-serviceca\") pod \"node-ca-4hrlr\" (UID: \"5c78b0cc-34ce-48fe-aeb3-f84b04fc6af6\") " pod="openshift-image-registry/node-ca-4hrlr" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.539252 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/472d4dbe-4674-43ba-98da-98502eccb960-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-b7fmf\" (UID: \"472d4dbe-4674-43ba-98da-98502eccb960\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.539460 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/288ee203-be3f-4176-90b2-7d95ee47aee8-multus-daemon-config\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.539714 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/288ee203-be3f-4176-90b2-7d95ee47aee8-host-run-netns\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.539718 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/575dcc54-1cfa-45ab-8c22-087fcf27f142-os-release\") pod \"multus-additional-cni-plugins-9xjh5\" (UID: \"575dcc54-1cfa-45ab-8c22-087fcf27f142\") " pod="openshift-multus/multus-additional-cni-plugins-9xjh5" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.540183 5112 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5c78b0cc-34ce-48fe-aeb3-f84b04fc6af6-host\") pod \"node-ca-4hrlr\" (UID: \"5c78b0cc-34ce-48fe-aeb3-f84b04fc6af6\") " pod="openshift-image-registry/node-ca-4hrlr" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.540278 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5c78b0cc-34ce-48fe-aeb3-f84b04fc6af6-host\") pod \"node-ca-4hrlr\" (UID: \"5c78b0cc-34ce-48fe-aeb3-f84b04fc6af6\") " pod="openshift-image-registry/node-ca-4hrlr" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.540318 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/288ee203-be3f-4176-90b2-7d95ee47aee8-host-run-netns\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.540549 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/575dcc54-1cfa-45ab-8c22-087fcf27f142-cni-binary-copy\") pod \"multus-additional-cni-plugins-9xjh5\" (UID: \"575dcc54-1cfa-45ab-8c22-087fcf27f142\") " pod="openshift-multus/multus-additional-cni-plugins-9xjh5" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.545561 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/95e46da0-94bb-4d22-804b-b3018984cdac-proxy-tls\") pod \"machine-config-daemon-s6wzf\" (UID: \"95e46da0-94bb-4d22-804b-b3018984cdac\") " pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.550489 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/472d4dbe-4674-43ba-98da-98502eccb960-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-b7fmf\" (UID: \"472d4dbe-4674-43ba-98da-98502eccb960\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.551282 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.555887 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mv6w\" (UniqueName: \"kubernetes.io/projected/3c4fb553-8514-4194-847c-96d40f8b41e3-kube-api-access-7mv6w\") pod \"network-metrics-daemon-7jq8h\" (UID: \"3c4fb553-8514-4194-847c-96d40f8b41e3\") " pod="openshift-multus/network-metrics-daemon-7jq8h" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.556397 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbcf4\" (UniqueName: \"kubernetes.io/projected/288ee203-be3f-4176-90b2-7d95ee47aee8-kube-api-access-gbcf4\") pod \"multus-kvv4v\" (UID: \"288ee203-be3f-4176-90b2-7d95ee47aee8\") " pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.556912 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pm48\" (UniqueName: \"kubernetes.io/projected/a63a9eaf-972d-4a8f-a9e5-f0f397bf8e9b-kube-api-access-4pm48\") pod \"node-resolver-rsc28\" (UID: \"a63a9eaf-972d-4a8f-a9e5-f0f397bf8e9b\") " pod="openshift-dns/node-resolver-rsc28" Dec 08 
17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.558542 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-88g7z\" (UniqueName: \"kubernetes.io/projected/5c78b0cc-34ce-48fe-aeb3-f84b04fc6af6-kube-api-access-88g7z\") pod \"node-ca-4hrlr\" (UID: \"5c78b0cc-34ce-48fe-aeb3-f84b04fc6af6\") " pod="openshift-image-registry/node-ca-4hrlr" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.558853 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv8p6\" (UniqueName: \"kubernetes.io/projected/472d4dbe-4674-43ba-98da-98502eccb960-kube-api-access-sv8p6\") pod \"ovnkube-control-plane-57b78d8988-b7fmf\" (UID: \"472d4dbe-4674-43ba-98da-98502eccb960\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.560188 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-56lk7\" (UniqueName: \"kubernetes.io/projected/95e46da0-94bb-4d22-804b-b3018984cdac-kube-api-access-56lk7\") pod \"machine-config-daemon-s6wzf\" (UID: \"95e46da0-94bb-4d22-804b-b3018984cdac\") " pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.560717 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.561960 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c98z\" (UniqueName: \"kubernetes.io/projected/575dcc54-1cfa-45ab-8c22-087fcf27f142-kube-api-access-5c98z\") pod \"multus-additional-cni-plugins-9xjh5\" (UID: \"575dcc54-1cfa-45ab-8c22-087fcf27f142\") " pod="openshift-multus/multus-additional-cni-plugins-9xjh5" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.563339 5112 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.569230 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.577268 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-rsc28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a63a9eaf-972d-4a8f-a9e5-f0f397bf8e9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pm48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rsc28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.592469 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0510de3f-316a-4902-a746-a746c3ce594c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ng27z\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.595968 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.596007 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.596017 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.596032 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.596041 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:41Z","lastTransitionTime":"2025-12-08T17:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.604489 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a54de98a-e0fb-42e6-9458-35bf008a1af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8e1ad1521591e581cd357d3b49dde54e9a2c1a793edc8dced64f3acbe9f7f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://69cc882495a4c55c83d8793d16e873cde0e5c81bbf76ed52eec3ed59b99b937f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5e0e157a3ba41263bd7a39a6c64f50ccf232bc55ef3df90ffbbd314418ce69bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\
\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0018c38fa9031e6a1ae5e3b128294b6e93ace41c3e09982a94e2af830b208186\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0018c38fa9031e6a1ae5e3b128294b6e93ace41c3e09982a94e2af830b208186\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.619637 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d35301b2-73ca-44c7-bb4c-e7e68d41ac54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://635a0eb4e03cd1986945a33c0535e16c4f6d5cd1fbcbdb9e1ecf08f5ef7f71a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://26119bf950efdaf6bdf29fdd288dcde55f46e233074dadd05d09ae9662a98ac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62545f7eda9644c677205810fa88d197c6edaf171a4f7972db714c29d0e0c0f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7a8bdf5c7a
f29378b589fe72e7884a1b14b3ee02680ca8842031e246f786fa59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a8bdf5c7af29378b589fe72e7884a1b14b3ee02680ca8842031e246f786fa59\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T17:41:31Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW1208 17:41:31.167389 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 17:41:31.167693 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 17:41:31.168628 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3936714883/tls.crt::/tmp/serving-cert-3936714883/tls.key\\\\\\\"\\\\nI1208 17:41:31.681853 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 17:41:31.683635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 17:41:31.683651 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 17:41:31.683675 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 17:41:31.683681 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 17:41:31.690777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 17:41:31.690804 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 17:41:31.690811 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 17:41:31.690838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 17:41:31.690843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 17:41:31.690848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 17:41:31.690851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 17:41:31.690855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 17:41:31.693539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T17:41:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://43d313abdeed2aafe14cb80ae38e97b5bbdc62197de02664fbc6c17597822faa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3fbd9e0043ef37c10fbc9823215d21c9b1eee4025f73932cd6cbf3055d5f2397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fbd9e0043ef37c10fbc9823215d21c9b1eee4025f73932cd6cbf3055d5f2397\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.622797 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.628809 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.630036 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.639884 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.640475 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.640520 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.640543 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.640567 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.640590 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.640610 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.640630 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.640958 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.641026 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.641068 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.641121 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.641191 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.641229 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.641257 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.641291 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.641276 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.641318 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.641437 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.641448 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.641491 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.641513 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.641529 5112 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.641562 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.641579 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.641594 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.641609 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.641675 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: 
\"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.641693 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.641709 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.641724 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.641739 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.641753 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.641772 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.641811 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.641827 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.641846 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.641866 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.641881 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 
17:41:41.641897 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.641915 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.641937 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.641958 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.641985 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642009 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642031 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642051 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642073 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642112 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642133 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: 
\"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642156 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642175 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642197 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642222 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642246 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642271 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642287 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642305 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642323 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642342 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642357 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: 
\"e093be35-bb62-4843-b2e8-094545761610\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642372 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642390 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642409 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642426 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642442 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642461 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642480 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642498 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642519 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642541 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642571 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 08 17:41:41 crc 
kubenswrapper[5112]: I1208 17:41:41.642588 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642608 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642633 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642651 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642668 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642685 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: 
\"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642701 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642722 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642738 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642756 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642775 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 08 
17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642792 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642808 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642825 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642843 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642860 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642877 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642895 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642925 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642976 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642998 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.643016 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.643033 5112 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.643051 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.643068 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.643087 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.643143 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.643179 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: 
\"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.643197 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.643213 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.643230 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.643248 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.643326 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.643346 5112 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.643393 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.643410 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.643429 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.643447 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.643466 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: 
\"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.643487 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.643505 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.643499 5112 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 17:41:41 crc kubenswrapper[5112]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Dec 08 17:41:41 crc kubenswrapper[5112]: set -o allexport Dec 08 17:41:41 crc kubenswrapper[5112]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Dec 08 17:41:41 crc kubenswrapper[5112]: source /etc/kubernetes/apiserver-url.env Dec 08 17:41:41 crc kubenswrapper[5112]: else Dec 08 17:41:41 crc kubenswrapper[5112]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Dec 08 17:41:41 crc kubenswrapper[5112]: exit 1 Dec 08 17:41:41 crc kubenswrapper[5112]: fi Dec 08 17:41:41 crc kubenswrapper[5112]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Dec 08 17:41:41 crc kubenswrapper[5112]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 17:41:41 crc kubenswrapper[5112]: > logger="UnhandledError" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.643526 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.643695 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod 
\"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.643733 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.643760 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.643787 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.643822 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.643849 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.643904 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" 
(UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.643931 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.643957 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.643983 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.644016 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.644042 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 08 17:41:41 crc 
kubenswrapper[5112]: I1208 17:41:41.644084 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.644363 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.644402 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.644462 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.644488 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.644515 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod 
\"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.644572 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.644598 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.644625 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.644649 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.644674 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.644699 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.644734 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.644774 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.644894 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.645027 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.645060 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 08 17:41:41 crc 
kubenswrapper[5112]: I1208 17:41:41.645155 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.645193 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.645223 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.645250 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.645278 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.645318 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.645344 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.645372 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.645401 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.645431 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.645487 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 17:41:41 crc 
kubenswrapper[5112]: I1208 17:41:41.645516 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.645568 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.645719 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.645774 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.645832 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.646122 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.647869 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-rsc28" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.653341 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-kvv4v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"288ee203-be3f-4176-90b2-7d95ee47aee8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbcf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kvv4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.654920 5112 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 17:41:41 crc kubenswrapper[5112]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 08 17:41:41 crc kubenswrapper[5112]: if [[ -f "/env/_master" ]]; then Dec 08 17:41:41 crc kubenswrapper[5112]: set -o allexport Dec 08 17:41:41 crc kubenswrapper[5112]: source "/env/_master" Dec 08 17:41:41 crc kubenswrapper[5112]: set +o allexport Dec 08 17:41:41 crc kubenswrapper[5112]: fi Dec 08 17:41:41 crc kubenswrapper[5112]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Dec 08 17:41:41 crc kubenswrapper[5112]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Dec 08 17:41:41 crc kubenswrapper[5112]: ho_enable="--enable-hybrid-overlay" Dec 08 17:41:41 crc kubenswrapper[5112]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Dec 08 17:41:41 crc kubenswrapper[5112]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Dec 08 17:41:41 crc kubenswrapper[5112]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Dec 08 17:41:41 crc kubenswrapper[5112]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 08 17:41:41 crc kubenswrapper[5112]: --webhook-cert-dir="/etc/webhook-cert" \ Dec 08 17:41:41 crc kubenswrapper[5112]: --webhook-host=127.0.0.1 \ Dec 08 17:41:41 crc kubenswrapper[5112]: --webhook-port=9743 \ Dec 08 17:41:41 crc kubenswrapper[5112]: ${ho_enable} \ Dec 08 17:41:41 crc kubenswrapper[5112]: --enable-interconnect \ Dec 08 17:41:41 crc kubenswrapper[5112]: --disable-approver \ Dec 08 17:41:41 crc kubenswrapper[5112]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Dec 08 17:41:41 crc kubenswrapper[5112]: --wait-for-kubernetes-api=200s \ Dec 08 17:41:41 crc kubenswrapper[5112]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Dec 08 17:41:41 crc kubenswrapper[5112]: --loglevel="${LOGLEVEL}" Dec 08 17:41:41 crc kubenswrapper[5112]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 17:41:41 crc kubenswrapper[5112]: > logger="UnhandledError" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.641954 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). 
InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642527 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642541 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642540 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642735 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642852 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.642893 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.643311 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.643814 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp".
PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.643848 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.644029 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.644058 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.644200 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir".
PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.644484 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.644704 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.644752 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.645030 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access".
PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.645230 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.645261 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.645347 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.645375 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca".
PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.645629 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.645729 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.645771 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.645992 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics".
PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.646002 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:41:42.145982813 +0000 UTC m=+79.155531514 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.660273 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.660363 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config".
PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.660581 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.660682 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.660715 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.660953 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh".
PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.660958 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.661359 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.662676 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.663418 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token".
PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.663584 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.663688 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.663652 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jq8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c4fb553-8514-4194-847c-96d40f8b41e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mv6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mv6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jq8h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.663776 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.663940 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.646890 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.646973 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl".
PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.646996 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.647351 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.647325 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.647362 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir".
PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.647626 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.647663 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.648296 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.648460 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd".
PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.648548 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.648629 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.648739 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.648831 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config".
PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.648868 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.648973 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.649175 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.649333 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7".
PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.649356 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.649639 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.648970 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.649665 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp".
PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.650135 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.650296 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.650304 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.650384 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config".
PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.650325 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.653381 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.653791 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.653959 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert".
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.654049 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.654347 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.654373 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.654514 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.654621 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.654653 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.654760 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.654989 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.655393 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.655419 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.655512 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.655577 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.655629 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.655647 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.655690 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.655953 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.656086 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.656330 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.656418 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.656560 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.656660 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.656682 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.656841 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.657056 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.657176 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.657298 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.657729 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.657997 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.654430 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.664299 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.664550 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.664403 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.664564 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.665118 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.665148 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.665570 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.665761 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.666302 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.666302 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.666346 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.666294 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.666404 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.666426 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.666449 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.666450 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.666469 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.666547 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.666571 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.666591 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.666615 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.666591 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.666603 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.666636 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.666690 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.666731 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.666713 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.666806 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.666805 5112 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 17:41:41 crc kubenswrapper[5112]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 08 17:41:41 crc kubenswrapper[5112]: if [[ -f "/env/_master" ]]; then Dec 08 17:41:41 crc kubenswrapper[5112]: set -o allexport Dec 08 17:41:41 crc kubenswrapper[5112]: source "/env/_master" Dec 08 17:41:41 crc kubenswrapper[5112]: set +o allexport Dec 08 17:41:41 crc kubenswrapper[5112]: fi Dec 08 17:41:41 crc kubenswrapper[5112]: Dec 08 17:41:41 crc kubenswrapper[5112]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Dec 08 17:41:41 crc kubenswrapper[5112]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 08 17:41:41 crc kubenswrapper[5112]: --disable-webhook \ Dec 08 17:41:41 crc kubenswrapper[5112]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Dec 08 17:41:41 crc kubenswrapper[5112]: --loglevel="${LOGLEVEL}" Dec 08 17:41:41 crc kubenswrapper[5112]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 17:41:41 crc kubenswrapper[5112]: > logger="UnhandledError" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.666842 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.666875 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.666896 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.666906 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.666938 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.666972 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.667000 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" 
(UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.667030 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.667034 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.667053 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.667166 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.667223 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod 
\"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.667276 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.667323 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.667327 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.667367 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.667447 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.667497 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.667540 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.667580 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.667614 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.667630 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.668169 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.668393 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.668568 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.668602 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.668614 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.668648 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.668662 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.668684 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.668715 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.668727 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.668761 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.668795 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.668820 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.668847 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.668883 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.668912 5112 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.668943 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.668953 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.668968 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.669708 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.670449 5112 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.670473 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.670489 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"ef8a872843e6f3fc1893f5a5c8007f2e79d6511a34a2ce7a4cc117cf964385a8"} Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.670496 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.670543 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.670571 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: 
\"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.670593 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.670625 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.670644 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.670663 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.670682 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.670702 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.670725 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.670751 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.670771 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.670793 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.670814 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: 
\"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.670834 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.670853 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.670873 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.670893 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.670913 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.670955 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.671197 5112 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.671370 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-systemd-units\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.671397 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-host-cni-bin\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.671569 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.671599 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0510de3f-316a-4902-a746-a746c3ce594c-ovnkube-config\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.671684 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-host-cni-bin\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.671711 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-systemd-units\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.671736 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.672312 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0510de3f-316a-4902-a746-a746c3ce594c-ovnkube-config\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.672349 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.672395 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-host-run-netns\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.672424 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-host-run-netns\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.672514 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-host-cni-netd\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.672552 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-host-cni-netd\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.672583 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-host-kubelet\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.672602 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-host-run-ovn-kubernetes\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.672639 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-host-run-ovn-kubernetes\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.672657 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-host-kubelet\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.672677 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7vcrm\" (UniqueName: \"kubernetes.io/projected/0510de3f-316a-4902-a746-a746c3ce594c-kube-api-access-7vcrm\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.674095 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-4hrlr" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.675249 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-var-lib-openvswitch\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.675290 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0510de3f-316a-4902-a746-a746c3ce594c-env-overrides\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.675343 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95e46da0-94bb-4d22-804b-b3018984cdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56lk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56lk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s6wzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.675379 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-run-systemd\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.675426 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-node-log\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.675519 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-run-systemd\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.676434 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0510de3f-316a-4902-a746-a746c3ce594c-env-overrides\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.676492 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-var-lib-openvswitch\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc 
kubenswrapper[5112]: I1208 17:41:41.676531 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-node-log\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.676581 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-log-socket\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.676642 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-log-socket\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.676880 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0510de3f-316a-4902-a746-a746c3ce594c-ovnkube-script-lib\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.679092 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0510de3f-316a-4902-a746-a746c3ce594c-ovnkube-script-lib\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.680817 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"6ca4cc0870dcaab27346c5f81feed4c6e576bfcaa9d34a50c57dc8342acfea34"} Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.670051 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.681289 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.681375 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.681537 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.681803 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.681869 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.682075 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.676133 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.682190 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-etc-openvswitch\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.670228 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.670161 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.671204 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.671492 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.671597 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.672463 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.672471 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.672518 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.672736 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.672757 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.672887 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.673295 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.673171 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.673645 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.673482 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.673921 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.674651 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.675021 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.675061 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.675356 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.675946 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.676246 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.676674 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.677571 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.677593 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.678358 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.678556 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.678728 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.678910 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.679063 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.679569 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.679790 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.680195 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.680797 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.680859 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.681012 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.681165 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.681300 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.682358 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.682396 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-run-openvswitch\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.681875 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.682376 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.682441 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-run-openvswitch\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.682471 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0510de3f-316a-4902-a746-a746c3ce594c-ovn-node-metrics-cert\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.682708 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-etc-openvswitch\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.682806 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-9xjh5" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.682921 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-host-slash\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.682979 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.682985 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-host-slash\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.683139 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-run-ovn\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.683294 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-run-ovn\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 
17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.683558 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"a953cdcc5ff5fb5185180c05b2f81cb953721d1b62374285bf2e3f6c6b858c42"}
Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.682184 5112 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Dec 08 17:41:41 crc kubenswrapper[5112]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash
Dec 08 17:41:41 crc kubenswrapper[5112]: set -uo pipefail
Dec 08 17:41:41 crc kubenswrapper[5112]: 
Dec 08 17:41:41 crc kubenswrapper[5112]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM
Dec 08 17:41:41 crc kubenswrapper[5112]: 
Dec 08 17:41:41 crc kubenswrapper[5112]: OPENSHIFT_MARKER="openshift-generated-node-resolver"
Dec 08 17:41:41 crc kubenswrapper[5112]: HOSTS_FILE="/etc/hosts"
Dec 08 17:41:41 crc kubenswrapper[5112]: TEMP_FILE="/tmp/hosts.tmp"
Dec 08 17:41:41 crc kubenswrapper[5112]: 
Dec 08 17:41:41 crc kubenswrapper[5112]: IFS=', ' read -r -a services <<< "${SERVICES}"
Dec 08 17:41:41 crc kubenswrapper[5112]: 
Dec 08 17:41:41 crc kubenswrapper[5112]: # Make a temporary file with the old hosts file's attributes.
Dec 08 17:41:41 crc kubenswrapper[5112]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then
Dec 08 17:41:41 crc kubenswrapper[5112]: echo "Failed to preserve hosts file. Exiting."
Dec 08 17:41:41 crc kubenswrapper[5112]: exit 1
Dec 08 17:41:41 crc kubenswrapper[5112]: fi
Dec 08 17:41:41 crc kubenswrapper[5112]: 
Dec 08 17:41:41 crc kubenswrapper[5112]: while true; do
Dec 08 17:41:41 crc kubenswrapper[5112]: declare -A svc_ips
Dec 08 17:41:41 crc kubenswrapper[5112]: for svc in "${services[@]}"; do
Dec 08 17:41:41 crc kubenswrapper[5112]: # Fetch service IP from cluster dns if present. We make several tries
Dec 08 17:41:41 crc kubenswrapper[5112]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones
Dec 08 17:41:41 crc kubenswrapper[5112]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not
Dec 08 17:41:41 crc kubenswrapper[5112]: # support UDP loadbalancers and require reaching DNS through TCP.
Dec 08 17:41:41 crc kubenswrapper[5112]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"'
Dec 08 17:41:41 crc kubenswrapper[5112]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"'
Dec 08 17:41:41 crc kubenswrapper[5112]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"'
Dec 08 17:41:41 crc kubenswrapper[5112]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"')
Dec 08 17:41:41 crc kubenswrapper[5112]: for i in ${!cmds[*]}
Dec 08 17:41:41 crc kubenswrapper[5112]: do
Dec 08 17:41:41 crc kubenswrapper[5112]: ips=($(eval "${cmds[i]}"))
Dec 08 17:41:41 crc kubenswrapper[5112]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then
Dec 08 17:41:41 crc kubenswrapper[5112]: svc_ips["${svc}"]="${ips[@]}"
Dec 08 17:41:41 crc kubenswrapper[5112]: break
Dec 08 17:41:41 crc kubenswrapper[5112]: fi
Dec 08 17:41:41 crc kubenswrapper[5112]: done
Dec 08 17:41:41 crc kubenswrapper[5112]: done
Dec 08 17:41:41 crc kubenswrapper[5112]: 
Dec 08 17:41:41 crc kubenswrapper[5112]: # Update /etc/hosts only if we get valid service IPs
Dec 08 17:41:41 crc kubenswrapper[5112]: # We will not update /etc/hosts when there is coredns service outage or api unavailability
Dec 08 17:41:41 crc kubenswrapper[5112]: # Stale entries could exist in /etc/hosts if the service is deleted
Dec 08 17:41:41 crc kubenswrapper[5112]: if [[ -n "${svc_ips[*]-}" ]]; then
Dec 08 17:41:41 crc kubenswrapper[5112]: # Build a new hosts file from /etc/hosts with our custom entries filtered out
Dec 08 17:41:41 crc kubenswrapper[5112]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then
Dec 08 17:41:41 crc kubenswrapper[5112]: # Only continue rebuilding the hosts entries if its original content is preserved
Dec 08 17:41:41 crc kubenswrapper[5112]: sleep 60 & wait
Dec 08 17:41:41 crc kubenswrapper[5112]: continue
Dec 08 17:41:41 crc kubenswrapper[5112]: fi
Dec 08 17:41:41 crc kubenswrapper[5112]: 
Dec 08 17:41:41 crc kubenswrapper[5112]: # Append resolver entries for services
Dec 08 17:41:41 crc kubenswrapper[5112]: rc=0
Dec 08 17:41:41 crc kubenswrapper[5112]: for svc in "${!svc_ips[@]}"; do
Dec 08 17:41:41 crc kubenswrapper[5112]: for ip in ${svc_ips[${svc}]}; do
Dec 08 17:41:41 crc kubenswrapper[5112]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$?
Dec 08 17:41:41 crc kubenswrapper[5112]: done
Dec 08 17:41:41 crc kubenswrapper[5112]: done
Dec 08 17:41:41 crc kubenswrapper[5112]: if [[ $rc -ne 0 ]]; then
Dec 08 17:41:41 crc kubenswrapper[5112]: sleep 60 & wait
Dec 08 17:41:41 crc kubenswrapper[5112]: continue
Dec 08 17:41:41 crc kubenswrapper[5112]: fi
Dec 08 17:41:41 crc kubenswrapper[5112]: 
Dec 08 17:41:41 crc kubenswrapper[5112]: 
Dec 08 17:41:41 crc kubenswrapper[5112]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior
Dec 08 17:41:41 crc kubenswrapper[5112]: # Replace /etc/hosts with our modified version if needed
Dec 08 17:41:41 crc kubenswrapper[5112]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}"
Dec 08 17:41:41 crc kubenswrapper[5112]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn
Dec 08 17:41:41 crc kubenswrapper[5112]: fi
Dec 08 17:41:41 crc kubenswrapper[5112]: sleep 60 & wait
Dec 08 17:41:41 crc kubenswrapper[5112]: unset svc_ips
Dec 08 17:41:41 crc kubenswrapper[5112]: done
Dec 08 17:41:41 crc kubenswrapper[5112]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4pm48,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-rsc28_openshift-dns(a63a9eaf-972d-4a8f-a9e5-f0f397bf8e9b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Dec 08 17:41:41 crc kubenswrapper[5112]: > logger="UnhandledError"
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.685303 5112 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.685805 5112 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.685892 5112 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.685964 5112 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.686021 5112 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.686087 5112 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.686191 5112 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.686262 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.686323 5112 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.686388 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.686448 5112 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.686508 5112 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.686665 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.686826 5112 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.686894 5112 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.686950 5112 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.687013 5112 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.687071 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.687313 5112 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.687395 5112 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.687458 5112 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.687525 5112 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.687587 5112 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.687656 5112 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.687775 5112 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.687888 5112 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.688040 5112 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.688142 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.688213 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.688267 5112 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.688413 5112 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.688509 5112 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.686602 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.686629 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.686633 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.686795 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.686804 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.686816 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.686895 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.688620 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.687016 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.688714 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.686780 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-rsc28" podUID="a63a9eaf-972d-4a8f-a9e5-f0f397bf8e9b"
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.687170 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.687180 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.687255 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.689424 5112 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Dec 08 17:41:41 crc kubenswrapper[5112]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash
Dec 08 17:41:41 crc kubenswrapper[5112]: set -o allexport
Dec 08 17:41:41 crc kubenswrapper[5112]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then
Dec 08 17:41:41 crc kubenswrapper[5112]: source /etc/kubernetes/apiserver-url.env
Dec 08 17:41:41 crc kubenswrapper[5112]: else
Dec 08 17:41:41 crc kubenswrapper[5112]: echo "Error: /etc/kubernetes/apiserver-url.env is missing"
Dec 08 17:41:41 crc kubenswrapper[5112]: exit 1
Dec 08 17:41:41 crc kubenswrapper[5112]: fi
Dec 08 17:41:41 crc kubenswrapper[5112]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104
Dec 08 17:41:41 crc kubenswrapper[5112]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Dec 08 17:41:41 crc kubenswrapper[5112]: > logger="UnhandledError"
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.687488 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.687809 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.689266 5112 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Dec 08 17:41:41 crc kubenswrapper[5112]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe
Dec 08 17:41:41 crc kubenswrapper[5112]: if [[ -f "/env/_master" ]]; then
Dec 08 17:41:41 crc kubenswrapper[5112]: set -o allexport
Dec 08 17:41:41 crc kubenswrapper[5112]: source "/env/_master"
Dec 08 17:41:41 crc kubenswrapper[5112]: set +o allexport
Dec 08 17:41:41 crc kubenswrapper[5112]: fi
Dec 08 17:41:41 crc kubenswrapper[5112]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled.
Dec 08 17:41:41 crc kubenswrapper[5112]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791
Dec 08 17:41:41 crc kubenswrapper[5112]: ho_enable="--enable-hybrid-overlay"
Dec 08 17:41:41 crc kubenswrapper[5112]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook"
Dec 08 17:41:41 crc kubenswrapper[5112]: # extra-allowed-user: service account `ovn-kubernetes-control-plane`
Dec 08 17:41:41 crc kubenswrapper[5112]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager)
Dec 08 17:41:41 crc kubenswrapper[5112]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \
Dec 08 17:41:41 crc kubenswrapper[5112]: --webhook-cert-dir="/etc/webhook-cert" \
Dec 08 17:41:41 crc kubenswrapper[5112]: --webhook-host=127.0.0.1 \
Dec 08 17:41:41 crc kubenswrapper[5112]: --webhook-port=9743 \
Dec 08 17:41:41 crc kubenswrapper[5112]: ${ho_enable} \
Dec 08 17:41:41 crc kubenswrapper[5112]: --enable-interconnect \
Dec 08 17:41:41 crc kubenswrapper[5112]: --disable-approver \
Dec 08 17:41:41 crc kubenswrapper[5112]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \
Dec 08 17:41:41 crc kubenswrapper[5112]: --wait-for-kubernetes-api=200s \
Dec 08 17:41:41 crc kubenswrapper[5112]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \
Dec 08 17:41:41 crc kubenswrapper[5112]: --loglevel="${LOGLEVEL}"
Dec 08 17:41:41 crc kubenswrapper[5112]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Dec 08 17:41:41 crc kubenswrapper[5112]: > logger="UnhandledError"
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.687860 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.687999 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.689512 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.689544 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0510de3f-316a-4902-a746-a746c3ce594c-ovn-node-metrics-cert\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z"
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.688254 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.688322 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.689599 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.689259 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.689710 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.690147 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.690201 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.690330 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.690527 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.690539 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.690737 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-kvv4v" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.690993 5112 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.691058 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.691165 5112 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.691305 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.691385 5112 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc 
kubenswrapper[5112]: I1208 17:41:41.691477 5112 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.691560 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.691644 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.691732 5112 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.691815 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.692110 5112 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.692190 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.692249 5112 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.692308 5112 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.692365 5112 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.692416 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.692468 5112 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.692520 5112 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.692580 5112 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.692639 5112 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.692702 5112 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.692763 5112 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.692817 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.692868 5112 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.692934 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.692991 5112 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.693043 5112 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc 
kubenswrapper[5112]: I1208 17:41:41.693225 5112 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.693290 5112 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.693352 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.693427 5112 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.693494 5112 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.693548 5112 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.693619 5112 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.693674 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" 
(UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.693730 5112 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.693821 5112 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.693875 5112 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.693934 5112 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.693987 5112 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.694044 5112 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.694113 5112 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node 
\"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.694170 5112 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.692667 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.694378 5112 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.694445 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.694575 5112 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.694631 5112 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.694681 5112 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.694738 5112 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.694790 5112 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.694848 5112 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.695128 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.695198 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.695253 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.695316 5112 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.695376 5112 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.695434 5112 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.695492 5112 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.695543 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.695595 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.695646 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.695703 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: 
\"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.695757 5112 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.695814 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.695866 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.694524 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.694580 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.694933 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.695186 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.694380 5112 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 17:41:41 crc kubenswrapper[5112]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 08 17:41:41 crc kubenswrapper[5112]: if [[ -f "/env/_master" ]]; then Dec 08 17:41:41 crc kubenswrapper[5112]: set -o allexport Dec 08 17:41:41 crc kubenswrapper[5112]: source "/env/_master" Dec 08 17:41:41 crc kubenswrapper[5112]: set +o allexport Dec 08 17:41:41 crc kubenswrapper[5112]: fi Dec 08 17:41:41 crc kubenswrapper[5112]: Dec 08 17:41:41 crc kubenswrapper[5112]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Dec 08 17:41:41 crc kubenswrapper[5112]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 08 17:41:41 crc kubenswrapper[5112]: --disable-webhook \ Dec 08 17:41:41 crc kubenswrapper[5112]: 
--csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Dec 08 17:41:41 crc kubenswrapper[5112]: --loglevel="${LOGLEVEL}" Dec 08 17:41:41 crc kubenswrapper[5112]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 17:41:41 crc kubenswrapper[5112]: > logger="UnhandledError" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.695007 5112 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vcrm\" (UniqueName: \"kubernetes.io/projected/0510de3f-316a-4902-a746-a746c3ce594c-kube-api-access-7vcrm\") pod \"ovnkube-node-ng27z\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.696774 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ad0b160-7036-4cfb-9738-1e0e8ebe1e5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://953eaf00aeddf0f031eb9db85dda27332777dd31ac6746dfdedcc13ed20cb02c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\
\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://a7b9e7098ad13452cf8f0aa13c84480bf630b57c0296cec645e8fd4f030b13fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7ace3eb0fbb6c37
ad43df89af7c25f6a0bda9c7e079a6bfb7683984630e7cd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c9534bda3d71b68f6920f0c8a5dd54d3d31bac188d8fb76a1d29a3f5f0b621a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"m
ountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://16bdf27bbd7b756aec823f0df94a6a72c5ad978e71a5e24824de2ab45e54c0c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://60a4953961d430db10a6fe21df995549ba6cbe84be5c4e7bfc3788c53c152bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"
resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60a4953961d430db10a6fe21df995549ba6cbe84be5c4e7bfc3788c53c152bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://276a86f68c5958c297d0c493b713318ea5ebe302666d6478a229eae2ffed90b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://276a86f68c5958c297d0c493b713318ea5ebe302666d6478a229eae2ffed90b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8759198baf7f22676f14d8513f3af488df4c58c4ed65db59aac2bff888a6d878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8759198baf7f22676f14d8513f3af488df4c58c4ed65db59aac2bff888a6d878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.697826 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.698006 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.698048 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.698115 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.698180 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.698234 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.698269 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.698326 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.698740 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.699132 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.699210 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.699257 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.699632 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.701552 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.701585 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.701596 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.701613 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.701624 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:41Z","lastTransitionTime":"2025-12-08T17:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.702494 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.705411 5112 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 17:41:41 crc kubenswrapper[5112]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Dec 08 17:41:41 crc kubenswrapper[5112]: while [ true ]; Dec 08 17:41:41 crc kubenswrapper[5112]: do Dec 08 17:41:41 crc kubenswrapper[5112]: for f in $(ls /tmp/serviceca); do Dec 08 17:41:41 crc kubenswrapper[5112]: echo $f Dec 08 17:41:41 crc kubenswrapper[5112]: ca_file_path="/tmp/serviceca/${f}" Dec 08 17:41:41 crc kubenswrapper[5112]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Dec 08 17:41:41 crc kubenswrapper[5112]: reg_dir_path="/etc/docker/certs.d/${f}" Dec 08 17:41:41 crc kubenswrapper[5112]: if [ -e "${reg_dir_path}" ]; then Dec 08 17:41:41 crc kubenswrapper[5112]: cp -u $ca_file_path $reg_dir_path/ca.crt Dec 08 17:41:41 crc kubenswrapper[5112]: else Dec 08 17:41:41 crc kubenswrapper[5112]: mkdir $reg_dir_path Dec 08 17:41:41 crc kubenswrapper[5112]: cp $ca_file_path $reg_dir_path/ca.crt Dec 08 17:41:41 crc kubenswrapper[5112]: fi Dec 08 17:41:41 crc kubenswrapper[5112]: done Dec 08 17:41:41 crc kubenswrapper[5112]: for d in $(ls /etc/docker/certs.d); do Dec 08 17:41:41 crc kubenswrapper[5112]: echo $d Dec 08 17:41:41 crc kubenswrapper[5112]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Dec 08 17:41:41 crc kubenswrapper[5112]: reg_conf_path="/tmp/serviceca/${dp}" Dec 08 17:41:41 crc kubenswrapper[5112]: if [ ! 
-e "${reg_conf_path}" ]; then Dec 08 17:41:41 crc kubenswrapper[5112]: rm -rf /etc/docker/certs.d/$d Dec 08 17:41:41 crc kubenswrapper[5112]: fi Dec 08 17:41:41 crc kubenswrapper[5112]: done Dec 08 17:41:41 crc kubenswrapper[5112]: sleep 60 & wait ${!} Dec 08 17:41:41 crc kubenswrapper[5112]: done Dec 08 17:41:41 crc kubenswrapper[5112]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-88g7z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-4hrlr_openshift-image-registry(5c78b0cc-34ce-48fe-aeb3-f84b04fc6af6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 17:41:41 crc kubenswrapper[5112]: > logger="UnhandledError" Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.705517 5112 
kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 17:41:41 crc kubenswrapper[5112]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Dec 08 17:41:41 crc kubenswrapper[5112]: set -euo pipefail Dec 08 17:41:41 crc kubenswrapper[5112]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Dec 08 17:41:41 crc kubenswrapper[5112]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Dec 08 17:41:41 crc kubenswrapper[5112]: # As the secret mount is optional we must wait for the files to be present. Dec 08 17:41:41 crc kubenswrapper[5112]: # The service is created in monitor.yaml and this is created in sdn.yaml. Dec 08 17:41:41 crc kubenswrapper[5112]: TS=$(date +%s) Dec 08 17:41:41 crc kubenswrapper[5112]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Dec 08 17:41:41 crc kubenswrapper[5112]: HAS_LOGGED_INFO=0 Dec 08 17:41:41 crc kubenswrapper[5112]: Dec 08 17:41:41 crc kubenswrapper[5112]: log_missing_certs(){ Dec 08 17:41:41 crc kubenswrapper[5112]: CUR_TS=$(date +%s) Dec 08 17:41:41 crc kubenswrapper[5112]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Dec 08 17:41:41 crc kubenswrapper[5112]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Dec 08 17:41:41 crc kubenswrapper[5112]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Dec 08 17:41:41 crc kubenswrapper[5112]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Dec 08 17:41:41 crc kubenswrapper[5112]: HAS_LOGGED_INFO=1 Dec 08 17:41:41 crc kubenswrapper[5112]: fi Dec 08 17:41:41 crc kubenswrapper[5112]: } Dec 08 17:41:41 crc kubenswrapper[5112]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Dec 08 17:41:41 crc kubenswrapper[5112]: log_missing_certs Dec 08 17:41:41 crc kubenswrapper[5112]: sleep 5 Dec 08 17:41:41 crc kubenswrapper[5112]: done Dec 08 17:41:41 crc kubenswrapper[5112]: Dec 08 17:41:41 crc kubenswrapper[5112]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Dec 08 17:41:41 crc kubenswrapper[5112]: exec /usr/bin/kube-rbac-proxy \ Dec 08 17:41:41 crc kubenswrapper[5112]: --logtostderr \ Dec 08 17:41:41 crc kubenswrapper[5112]: --secure-listen-address=:9108 \ Dec 08 17:41:41 crc kubenswrapper[5112]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Dec 08 17:41:41 crc kubenswrapper[5112]: --upstream=http://127.0.0.1:29108/ \ Dec 08 17:41:41 crc kubenswrapper[5112]: --tls-private-key-file=${TLS_PK} \ Dec 08 17:41:41 crc kubenswrapper[5112]: --tls-cert-file=${TLS_CERT} Dec 08 17:41:41 crc kubenswrapper[5112]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sv8p6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-b7fmf_openshift-ovn-kubernetes(472d4dbe-4674-43ba-98da-98502eccb960): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 17:41:41 crc kubenswrapper[5112]: > logger="UnhandledError" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.705536 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.707030 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-4hrlr" podUID="5c78b0cc-34ce-48fe-aeb3-f84b04fc6af6" Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.707708 5112 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 17:41:41 crc kubenswrapper[5112]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 08 17:41:41 crc kubenswrapper[5112]: if [[ -f "/env/_master" ]]; then Dec 08 17:41:41 crc kubenswrapper[5112]: set -o allexport Dec 08 17:41:41 crc kubenswrapper[5112]: source "/env/_master" Dec 08 17:41:41 crc kubenswrapper[5112]: set +o allexport Dec 08 17:41:41 crc kubenswrapper[5112]: fi Dec 08 17:41:41 crc kubenswrapper[5112]: Dec 08 17:41:41 crc kubenswrapper[5112]: ovn_v4_join_subnet_opt= Dec 08 17:41:41 crc kubenswrapper[5112]: if [[ "" != "" ]]; then Dec 08 17:41:41 crc kubenswrapper[5112]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Dec 08 17:41:41 crc kubenswrapper[5112]: fi Dec 08 17:41:41 crc kubenswrapper[5112]: ovn_v6_join_subnet_opt= Dec 08 17:41:41 crc kubenswrapper[5112]: if [[ "" != "" ]]; then Dec 08 17:41:41 crc kubenswrapper[5112]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Dec 08 17:41:41 crc kubenswrapper[5112]: fi Dec 08 17:41:41 crc kubenswrapper[5112]: Dec 08 17:41:41 crc kubenswrapper[5112]: ovn_v4_transit_switch_subnet_opt= Dec 08 17:41:41 crc kubenswrapper[5112]: if [[ "" != "" ]]; then Dec 08 17:41:41 crc kubenswrapper[5112]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet 
" Dec 08 17:41:41 crc kubenswrapper[5112]: fi Dec 08 17:41:41 crc kubenswrapper[5112]: ovn_v6_transit_switch_subnet_opt= Dec 08 17:41:41 crc kubenswrapper[5112]: if [[ "" != "" ]]; then Dec 08 17:41:41 crc kubenswrapper[5112]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Dec 08 17:41:41 crc kubenswrapper[5112]: fi Dec 08 17:41:41 crc kubenswrapper[5112]: Dec 08 17:41:41 crc kubenswrapper[5112]: dns_name_resolver_enabled_flag= Dec 08 17:41:41 crc kubenswrapper[5112]: if [[ "false" == "true" ]]; then Dec 08 17:41:41 crc kubenswrapper[5112]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Dec 08 17:41:41 crc kubenswrapper[5112]: fi Dec 08 17:41:41 crc kubenswrapper[5112]: Dec 08 17:41:41 crc kubenswrapper[5112]: persistent_ips_enabled_flag="--enable-persistent-ips" Dec 08 17:41:41 crc kubenswrapper[5112]: Dec 08 17:41:41 crc kubenswrapper[5112]: # This is needed so that converting clusters from GA to TP Dec 08 17:41:41 crc kubenswrapper[5112]: # will rollout control plane pods as well Dec 08 17:41:41 crc kubenswrapper[5112]: network_segmentation_enabled_flag= Dec 08 17:41:41 crc kubenswrapper[5112]: multi_network_enabled_flag= Dec 08 17:41:41 crc kubenswrapper[5112]: if [[ "true" == "true" ]]; then Dec 08 17:41:41 crc kubenswrapper[5112]: multi_network_enabled_flag="--enable-multi-network" Dec 08 17:41:41 crc kubenswrapper[5112]: fi Dec 08 17:41:41 crc kubenswrapper[5112]: if [[ "true" == "true" ]]; then Dec 08 17:41:41 crc kubenswrapper[5112]: if [[ "true" != "true" ]]; then Dec 08 17:41:41 crc kubenswrapper[5112]: multi_network_enabled_flag="--enable-multi-network" Dec 08 17:41:41 crc kubenswrapper[5112]: fi Dec 08 17:41:41 crc kubenswrapper[5112]: network_segmentation_enabled_flag="--enable-network-segmentation" Dec 08 17:41:41 crc kubenswrapper[5112]: fi Dec 08 17:41:41 crc kubenswrapper[5112]: Dec 08 17:41:41 crc kubenswrapper[5112]: route_advertisements_enable_flag= Dec 08 17:41:41 crc kubenswrapper[5112]: if [[ 
"false" == "true" ]]; then Dec 08 17:41:41 crc kubenswrapper[5112]: route_advertisements_enable_flag="--enable-route-advertisements" Dec 08 17:41:41 crc kubenswrapper[5112]: fi Dec 08 17:41:41 crc kubenswrapper[5112]: Dec 08 17:41:41 crc kubenswrapper[5112]: preconfigured_udn_addresses_enable_flag= Dec 08 17:41:41 crc kubenswrapper[5112]: if [[ "false" == "true" ]]; then Dec 08 17:41:41 crc kubenswrapper[5112]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Dec 08 17:41:41 crc kubenswrapper[5112]: fi Dec 08 17:41:41 crc kubenswrapper[5112]: Dec 08 17:41:41 crc kubenswrapper[5112]: # Enable multi-network policy if configured (control-plane always full mode) Dec 08 17:41:41 crc kubenswrapper[5112]: multi_network_policy_enabled_flag= Dec 08 17:41:41 crc kubenswrapper[5112]: if [[ "false" == "true" ]]; then Dec 08 17:41:41 crc kubenswrapper[5112]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Dec 08 17:41:41 crc kubenswrapper[5112]: fi Dec 08 17:41:41 crc kubenswrapper[5112]: Dec 08 17:41:41 crc kubenswrapper[5112]: # Enable admin network policy if configured (control-plane always full mode) Dec 08 17:41:41 crc kubenswrapper[5112]: admin_network_policy_enabled_flag= Dec 08 17:41:41 crc kubenswrapper[5112]: if [[ "true" == "true" ]]; then Dec 08 17:41:41 crc kubenswrapper[5112]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Dec 08 17:41:41 crc kubenswrapper[5112]: fi Dec 08 17:41:41 crc kubenswrapper[5112]: Dec 08 17:41:41 crc kubenswrapper[5112]: if [ "shared" == "shared" ]; then Dec 08 17:41:41 crc kubenswrapper[5112]: gateway_mode_flags="--gateway-mode shared" Dec 08 17:41:41 crc kubenswrapper[5112]: elif [ "shared" == "local" ]; then Dec 08 17:41:41 crc kubenswrapper[5112]: gateway_mode_flags="--gateway-mode local" Dec 08 17:41:41 crc kubenswrapper[5112]: else Dec 08 17:41:41 crc kubenswrapper[5112]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." 
Dec 08 17:41:41 crc kubenswrapper[5112]: exit 1 Dec 08 17:41:41 crc kubenswrapper[5112]: fi Dec 08 17:41:41 crc kubenswrapper[5112]: Dec 08 17:41:41 crc kubenswrapper[5112]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Dec 08 17:41:41 crc kubenswrapper[5112]: exec /usr/bin/ovnkube \ Dec 08 17:41:41 crc kubenswrapper[5112]: --enable-interconnect \ Dec 08 17:41:41 crc kubenswrapper[5112]: --init-cluster-manager "${K8S_NODE}" \ Dec 08 17:41:41 crc kubenswrapper[5112]: --config-file=/run/ovnkube-config/ovnkube.conf \ Dec 08 17:41:41 crc kubenswrapper[5112]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Dec 08 17:41:41 crc kubenswrapper[5112]: --metrics-bind-address "127.0.0.1:29108" \ Dec 08 17:41:41 crc kubenswrapper[5112]: --metrics-enable-pprof \ Dec 08 17:41:41 crc kubenswrapper[5112]: --metrics-enable-config-duration \ Dec 08 17:41:41 crc kubenswrapper[5112]: ${ovn_v4_join_subnet_opt} \ Dec 08 17:41:41 crc kubenswrapper[5112]: ${ovn_v6_join_subnet_opt} \ Dec 08 17:41:41 crc kubenswrapper[5112]: ${ovn_v4_transit_switch_subnet_opt} \ Dec 08 17:41:41 crc kubenswrapper[5112]: ${ovn_v6_transit_switch_subnet_opt} \ Dec 08 17:41:41 crc kubenswrapper[5112]: ${dns_name_resolver_enabled_flag} \ Dec 08 17:41:41 crc kubenswrapper[5112]: ${persistent_ips_enabled_flag} \ Dec 08 17:41:41 crc kubenswrapper[5112]: ${multi_network_enabled_flag} \ Dec 08 17:41:41 crc kubenswrapper[5112]: ${network_segmentation_enabled_flag} \ Dec 08 17:41:41 crc kubenswrapper[5112]: ${gateway_mode_flags} \ Dec 08 17:41:41 crc kubenswrapper[5112]: ${route_advertisements_enable_flag} \ Dec 08 17:41:41 crc kubenswrapper[5112]: ${preconfigured_udn_addresses_enable_flag} \ Dec 08 17:41:41 crc kubenswrapper[5112]: --enable-egress-ip=true \ Dec 08 17:41:41 crc kubenswrapper[5112]: --enable-egress-firewall=true \ Dec 08 17:41:41 crc kubenswrapper[5112]: --enable-egress-qos=true \ Dec 08 17:41:41 crc kubenswrapper[5112]: --enable-egress-service=true \ 
Dec 08 17:41:41 crc kubenswrapper[5112]: --enable-multicast \ Dec 08 17:41:41 crc kubenswrapper[5112]: --enable-multi-external-gateway=true \ Dec 08 17:41:41 crc kubenswrapper[5112]: ${multi_network_policy_enabled_flag} \ Dec 08 17:41:41 crc kubenswrapper[5112]: ${admin_network_policy_enabled_flag} Dec 08 17:41:41 crc kubenswrapper[5112]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sv8p6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ovnkube-control-plane-57b78d8988-b7fmf_openshift-ovn-kubernetes(472d4dbe-4674-43ba-98da-98502eccb960): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 17:41:41 crc kubenswrapper[5112]: > logger="UnhandledError" Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.708000 5112 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5c98z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResiz
ePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-9xjh5_openshift-multus(575dcc54-1cfa-45ab-8c22-087fcf27f142): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.708776 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.708813 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" podUID="472d4dbe-4674-43ba-98da-98502eccb960" Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.709074 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-9xjh5" podUID="575dcc54-1cfa-45ab-8c22-087fcf27f142" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.713699 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a7a500b-9152-4fa4-a5ef-7a037610043a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ea6605166b2660aac60c892c3aa4300f70f3c325fa54b0c5cebab4c59e7e44d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1f8f801ff9a8f57c7935a0dc59f8db7ead28e39b9b0235c5fbac0238e1ae4d1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://407e6dc04957ad635291d63043e12fc7751c6de36462219e6f8e991af59b523c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f47b69e17f8b8b7e2c46f449515d3eb8408a6ef649bf396eef3abeac2d4b2483\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.716755 5112 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 17:41:41 crc kubenswrapper[5112]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Dec 08 17:41:41 crc kubenswrapper[5112]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Dec 08 17:41:41 crc kubenswrapper[5112]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gbcf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fal
se,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-kvv4v_openshift-multus(288ee203-be3f-4176-90b2-7d95ee47aee8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 17:41:41 crc kubenswrapper[5112]: > logger="UnhandledError" Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.718264 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-kvv4v" podUID="288ee203-be3f-4176-90b2-7d95ee47aee8" Dec 08 17:41:41 crc kubenswrapper[5112]: W1208 17:41:41.720424 5112 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95e46da0_94bb_4d22_804b_b3018984cdac.slice/crio-a986e276f02f9b8263adce362f4152c857bdbefb95f0b72935b7fa4190236165 WatchSource:0}: Error finding container a986e276f02f9b8263adce362f4152c857bdbefb95f0b72935b7fa4190236165: Status 404 returned error can't find the container with id a986e276f02f9b8263adce362f4152c857bdbefb95f0b72935b7fa4190236165 Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.722861 5112 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start 
--payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-56lk7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-s6wzf_openshift-machine-config-operator(95e46da0-94bb-4d22-804b-b3018984cdac): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" 
logger="UnhandledError" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.725188 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.725378 5112 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-56lk7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-s6wzf_openshift-machine-config-operator(95e46da0-94bb-4d22-804b-b3018984cdac): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.726786 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" podUID="95e46da0-94bb-4d22-804b-b3018984cdac" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.727855 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.728029 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.740584 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.763214 5112 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.763990 5112 scope.go:117] "RemoveContainer" containerID="7a8bdf5c7af29378b589fe72e7884a1b14b3ee02680ca8842031e246f786fa59" Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.764201 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.771407 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.798967 5112 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799001 5112 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799012 5112 reconciler_common.go:299] "Volume detached for 
volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799023 5112 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799035 5112 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799050 5112 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799063 5112 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799081 5112 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799115 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799127 5112 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799172 5112 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799190 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799230 5112 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799242 5112 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799254 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799308 5112 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799321 5112 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799332 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799342 5112 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799353 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799366 5112 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799378 5112 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799389 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799399 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") 
on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799413 5112 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799424 5112 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799435 5112 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799447 5112 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799457 5112 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799468 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799479 5112 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799490 5112 
reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799519 5112 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799532 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799544 5112 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799557 5112 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799599 5112 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799611 5112 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799623 5112 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799634 5112 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799645 5112 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799655 5112 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799665 5112 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799676 5112 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799687 5112 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799698 5112 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799710 5112 
reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799722 5112 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799736 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799747 5112 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799760 5112 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799771 5112 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799783 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799795 5112 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799807 5112 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799820 5112 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.799832 5112 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.809832 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.809872 5112 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.809886 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.809920 5112 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.809948 5112 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.809963 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.810003 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.810019 5112 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.810029 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.810038 5112 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.810051 5112 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.810096 5112 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.810106 5112 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.810116 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.810125 5112 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.810172 5112 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.810183 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.810194 5112 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on 
node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.810203 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.810261 5112 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.810272 5112 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.810282 5112 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.810613 5112 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.810624 5112 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.810635 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.810644 5112 
reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.810975 5112 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.811008 5112 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.811024 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.811038 5112 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.811056 5112 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.811069 5112 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.811102 5112 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.811120 5112 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.811133 5112 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.812767 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.812820 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.812845 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.812868 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.812883 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:41Z","lastTransitionTime":"2025-12-08T17:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.816149 5112 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.816184 5112 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.816196 5112 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.816206 5112 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.816223 5112 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.816236 5112 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.816246 5112 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: 
I1208 17:41:41.816260 5112 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.816272 5112 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.816284 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.816294 5112 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.816307 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.816317 5112 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.816327 5112 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.816337 5112 
reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.816349 5112 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.816358 5112 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.816368 5112 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.816378 5112 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.816389 5112 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.816398 5112 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.816408 5112 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.816422 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.816431 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.816441 5112 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.816450 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.816463 5112 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.816472 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.816482 5112 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.816492 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.816504 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.816513 5112 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.816523 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.816856 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.849056 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9xjh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"575dcc54-1cfa-45ab-8c22-087fcf27f142\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9xjh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.866013 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:41:41 crc kubenswrapper[5112]: W1208 17:41:41.879405 5112 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0510de3f_316a_4902_a746_a746c3ce594c.slice/crio-23a017c1e028b6e6e5891a0947073823a15913426838b1754ef91de5e8f88124 WatchSource:0}: Error finding container 23a017c1e028b6e6e5891a0947073823a15913426838b1754ef91de5e8f88124: Status 404 returned error can't find the container with id 23a017c1e028b6e6e5891a0947073823a15913426838b1754ef91de5e8f88124 Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.882490 5112 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 17:41:41 crc kubenswrapper[5112]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Dec 08 17:41:41 crc kubenswrapper[5112]: apiVersion: v1 Dec 08 17:41:41 crc kubenswrapper[5112]: clusters: Dec 08 17:41:41 crc kubenswrapper[5112]: - cluster: Dec 08 17:41:41 crc kubenswrapper[5112]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Dec 08 17:41:41 crc kubenswrapper[5112]: server: https://api-int.crc.testing:6443 Dec 08 17:41:41 crc kubenswrapper[5112]: name: default-cluster Dec 08 17:41:41 crc kubenswrapper[5112]: contexts: Dec 08 17:41:41 crc kubenswrapper[5112]: - context: Dec 08 17:41:41 crc kubenswrapper[5112]: cluster: default-cluster Dec 08 17:41:41 crc kubenswrapper[5112]: namespace: default Dec 08 17:41:41 crc kubenswrapper[5112]: user: default-auth Dec 08 17:41:41 crc kubenswrapper[5112]: name: default-context Dec 08 17:41:41 crc kubenswrapper[5112]: current-context: default-context Dec 08 17:41:41 crc kubenswrapper[5112]: kind: Config Dec 08 17:41:41 crc kubenswrapper[5112]: preferences: {} Dec 08 17:41:41 crc kubenswrapper[5112]: 
users: Dec 08 17:41:41 crc kubenswrapper[5112]: - name: default-auth Dec 08 17:41:41 crc kubenswrapper[5112]: user: Dec 08 17:41:41 crc kubenswrapper[5112]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 08 17:41:41 crc kubenswrapper[5112]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 08 17:41:41 crc kubenswrapper[5112]: EOF Dec 08 17:41:41 crc kubenswrapper[5112]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7vcrm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-ng27z_openshift-ovn-kubernetes(0510de3f-316a-4902-a746-a746c3ce594c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 17:41:41 crc kubenswrapper[5112]: > logger="UnhandledError" Dec 08 17:41:41 crc kubenswrapper[5112]: E1208 17:41:41.883728 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" podUID="0510de3f-316a-4902-a746-a746c3ce594c" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 
17:41:41.886634 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"367e7840-8095-41c1-93ec-9c02ff4d243d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://63bd2b5515bea7e14b54005f1477f959aac15ff6b2771db37fc28e46eea6be70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\
":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://86e97499087a3161ebf425803d6ac7e66513b7c73c759c730b8a17858f4e1b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e97499087a3161ebf425803d6ac7e66513b7c73c759c730b8a17858f4e1b82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:41 crc 
kubenswrapper[5112]: I1208 17:41:41.915996 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.916074 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.916110 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.916128 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.916138 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:41Z","lastTransitionTime":"2025-12-08T17:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.926146 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:41 crc kubenswrapper[5112]: I1208 17:41:41.968274 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"472d4dbe-4674-43ba-98da-98502eccb960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv8p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv8p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-b7fmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.004539 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4hrlr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c78b0cc-34ce-48fe-aeb3-f84b04fc6af6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4hrlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.017861 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.017993 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:41:42 crc kubenswrapper[5112]: E1208 17:41:42.018016 5112 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 17:41:42 crc kubenswrapper[5112]: E1208 17:41:42.018159 5112 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 17:41:42 crc kubenswrapper[5112]: E1208 17:41:42.018177 5112 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 17:41:42 crc kubenswrapper[5112]: E1208 17:41:42.018190 5112 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:41:42 crc kubenswrapper[5112]: E1208 17:41:42.018194 5112 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 17:41:42 crc kubenswrapper[5112]: E1208 17:41:42.018164 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:41:43.018133147 +0000 UTC m=+80.027681878 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.018027 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:41:42 crc kubenswrapper[5112]: E1208 17:41:42.018259 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:41:43.018236789 +0000 UTC m=+80.027785520 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 17:41:42 crc kubenswrapper[5112]: E1208 17:41:42.018285 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 17:41:43.01827617 +0000 UTC m=+80.027824941 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.018342 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:41:42 crc kubenswrapper[5112]: E1208 17:41:42.018444 5112 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 17:41:42 crc kubenswrapper[5112]: E1208 17:41:42.018459 5112 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 17:41:42 crc kubenswrapper[5112]: E1208 17:41:42.018468 5112 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:41:42 crc kubenswrapper[5112]: E1208 17:41:42.018501 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. 
No retries permitted until 2025-12-08 17:41:43.018491086 +0000 UTC m=+80.028039797 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.019263 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.019291 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.019303 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.019447 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.019467 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:42Z","lastTransitionTime":"2025-12-08T17:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.049129 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a7a500b-9152-4fa4-a5ef-7a037610043a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ea6605166b2660aac60c892c3aa4300f70f3c325fa54b0c5cebab4c59e7e44d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\
":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1f8f801ff9a8f57c7935a0dc59f8db7ead28e39b9b0235c5fbac0238e1ae4d1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://407e6dc04957ad635291d63043e12fc7751c6de36462219e6f8e991af59b523c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde72610
9a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f47b69e17f8b8b7e2c46f449515d3eb8408a6ef649bf396eef3abeac2d4b2483\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.089946 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.119183 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3c4fb553-8514-4194-847c-96d40f8b41e3-metrics-certs\") pod \"network-metrics-daemon-7jq8h\" (UID: \"3c4fb553-8514-4194-847c-96d40f8b41e3\") " 
pod="openshift-multus/network-metrics-daemon-7jq8h" Dec 08 17:41:42 crc kubenswrapper[5112]: E1208 17:41:42.119397 5112 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 17:41:42 crc kubenswrapper[5112]: E1208 17:41:42.119536 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c4fb553-8514-4194-847c-96d40f8b41e3-metrics-certs podName:3c4fb553-8514-4194-847c-96d40f8b41e3 nodeName:}" failed. No retries permitted until 2025-12-08 17:41:43.119507594 +0000 UTC m=+80.129056335 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3c4fb553-8514-4194-847c-96d40f8b41e3-metrics-certs") pod "network-metrics-daemon-7jq8h" (UID: "3c4fb553-8514-4194-847c-96d40f8b41e3") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.122220 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.122373 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.122394 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.122419 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.122438 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:42Z","lastTransitionTime":"2025-12-08T17:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.130320 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.168901 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.213513 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9xjh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"575dcc54-1cfa-45ab-8c22-087fcf27f142\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\
"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/
etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9xjh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.221235 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:41:42 crc kubenswrapper[5112]: E1208 17:41:42.221541 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-08 17:41:43.221504728 +0000 UTC m=+80.231053469 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.225012 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.225068 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.225120 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.225143 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.225160 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:42Z","lastTransitionTime":"2025-12-08T17:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.247814 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"367e7840-8095-41c1-93ec-9c02ff4d243d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://63bd2b5515bea7e14b54005f1477f959aac15ff6b2771db37fc28e46eea6be70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":
65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://86e97499087a3161ebf425803d6ac7e66513b7c73c759c730b8a17858f4e1b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e97499087a3161ebf425803d6ac7e66513b7c73c759c730b8a17858f4e1b82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.290781 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.325292 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"472d4dbe-4674-43ba-98da-98502eccb960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv8p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv8p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-b7fmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.326977 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.327138 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.327150 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.327166 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.327176 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:42Z","lastTransitionTime":"2025-12-08T17:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.366010 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4hrlr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c78b0cc-34ce-48fe-aeb3-f84b04fc6af6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4hrlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.406394 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.429211 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.429265 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.429277 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.429291 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.429300 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:42Z","lastTransitionTime":"2025-12-08T17:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.444658 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-rsc28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a63a9eaf-972d-4a8f-a9e5-f0f397bf8e9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pm48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rsc28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.492677 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0510de3f-316a-4902-a746-a746c3ce594c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ng27z\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.529382 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a54de98a-e0fb-42e6-9458-35bf008a1af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8e1ad1521591e581cd357d3b49dde54e9a2c1a793edc8dced64f3acbe9f7f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\
\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://69cc882495a4c55c83d8793d16e873cde0e5c81bbf76ed52eec3ed59b99b937f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5e0e157a3ba41263bd7a39a6c64f50ccf232bc55ef3df90ffbbd314418ce69bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0018c38fa9031e6a1ae5e3b128294b6e93ace41c3e09982a94e2af830b208186\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0018c38fa9031e6a1ae5e3b128294b6e93ace41c3e09982a94e2af830b208186\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17
:40:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.530709 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.530752 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.530782 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.530799 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.530811 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:42Z","lastTransitionTime":"2025-12-08T17:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.570222 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d35301b2-73ca-44c7-bb4c-e7e68d41ac54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://635a0eb4e03cd1986945a33c0535e16c4f6d5cd1fbcbdb9e1ecf08f5ef7f71a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://26119bf950efdaf6bdf29fdd288dcde55f46e233074dadd05d09ae9662a98ac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62545f7eda9644c677205810fa88d197c6edaf171a4f7972db714c29d0e0c0f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7a8bdf5c7a
f29378b589fe72e7884a1b14b3ee02680ca8842031e246f786fa59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a8bdf5c7af29378b589fe72e7884a1b14b3ee02680ca8842031e246f786fa59\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T17:41:31Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW1208 17:41:31.167389 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 17:41:31.167693 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 17:41:31.168628 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3936714883/tls.crt::/tmp/serving-cert-3936714883/tls.key\\\\\\\"\\\\nI1208 17:41:31.681853 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 17:41:31.683635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 17:41:31.683651 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 17:41:31.683675 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 17:41:31.683681 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 17:41:31.690777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 17:41:31.690804 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 17:41:31.690811 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 17:41:31.690838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 17:41:31.690843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 17:41:31.690848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 17:41:31.690851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 17:41:31.690855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 17:41:31.693539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T17:41:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://43d313abdeed2aafe14cb80ae38e97b5bbdc62197de02664fbc6c17597822faa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3fbd9e0043ef37c10fbc9823215d21c9b1eee4025f73932cd6cbf3055d5f2397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fbd9e0043ef37c10fbc9823215d21c9b1eee4025f73932cd6cbf3055d5f2397\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.606701 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.633944 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.634027 5112 kubelet_node_status.go:736] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.634046 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.634067 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.634106 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:42Z","lastTransitionTime":"2025-12-08T17:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.652696 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-kvv4v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"288ee203-be3f-4176-90b2-7d95ee47aee8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbcf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kvv4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.684733 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jq8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c4fb553-8514-4194-847c-96d40f8b41e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mv6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mv6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jq8h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.687252 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" event={"ID":"0510de3f-316a-4902-a746-a746c3ce594c","Type":"ContainerStarted","Data":"23a017c1e028b6e6e5891a0947073823a15913426838b1754ef91de5e8f88124"} Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.688178 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-4hrlr" event={"ID":"5c78b0cc-34ce-48fe-aeb3-f84b04fc6af6","Type":"ContainerStarted","Data":"fbf296ef092fe57389094aadb85518671a8e88d5ec5e9c8fa3a8fe3d10343da0"} Dec 08 17:41:42 crc kubenswrapper[5112]: E1208 17:41:42.689001 5112 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 17:41:42 crc kubenswrapper[5112]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Dec 08 17:41:42 crc kubenswrapper[5112]: apiVersion: v1 Dec 08 17:41:42 crc kubenswrapper[5112]: clusters: Dec 08 17:41:42 crc kubenswrapper[5112]: - cluster: Dec 08 17:41:42 crc kubenswrapper[5112]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Dec 08 17:41:42 crc kubenswrapper[5112]: server: https://api-int.crc.testing:6443 Dec 08 17:41:42 crc kubenswrapper[5112]: name: default-cluster Dec 08 17:41:42 crc kubenswrapper[5112]: contexts: Dec 08 17:41:42 crc kubenswrapper[5112]: - context: Dec 08 17:41:42 crc kubenswrapper[5112]: cluster: default-cluster Dec 08 17:41:42 crc kubenswrapper[5112]: namespace: default Dec 08 17:41:42 crc kubenswrapper[5112]: user: default-auth Dec 08 17:41:42 crc kubenswrapper[5112]: name: default-context Dec 08 17:41:42 crc 
kubenswrapper[5112]: current-context: default-context Dec 08 17:41:42 crc kubenswrapper[5112]: kind: Config Dec 08 17:41:42 crc kubenswrapper[5112]: preferences: {} Dec 08 17:41:42 crc kubenswrapper[5112]: users: Dec 08 17:41:42 crc kubenswrapper[5112]: - name: default-auth Dec 08 17:41:42 crc kubenswrapper[5112]: user: Dec 08 17:41:42 crc kubenswrapper[5112]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 08 17:41:42 crc kubenswrapper[5112]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 08 17:41:42 crc kubenswrapper[5112]: EOF Dec 08 17:41:42 crc kubenswrapper[5112]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7vcrm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-ng27z_openshift-ovn-kubernetes(0510de3f-316a-4902-a746-a746c3ce594c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 17:41:42 crc kubenswrapper[5112]: > logger="UnhandledError" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.689603 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" 
event={"ID":"95e46da0-94bb-4d22-804b-b3018984cdac","Type":"ContainerStarted","Data":"a986e276f02f9b8263adce362f4152c857bdbefb95f0b72935b7fa4190236165"} Dec 08 17:41:42 crc kubenswrapper[5112]: E1208 17:41:42.689662 5112 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 17:41:42 crc kubenswrapper[5112]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Dec 08 17:41:42 crc kubenswrapper[5112]: while [ true ]; Dec 08 17:41:42 crc kubenswrapper[5112]: do Dec 08 17:41:42 crc kubenswrapper[5112]: for f in $(ls /tmp/serviceca); do Dec 08 17:41:42 crc kubenswrapper[5112]: echo $f Dec 08 17:41:42 crc kubenswrapper[5112]: ca_file_path="/tmp/serviceca/${f}" Dec 08 17:41:42 crc kubenswrapper[5112]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Dec 08 17:41:42 crc kubenswrapper[5112]: reg_dir_path="/etc/docker/certs.d/${f}" Dec 08 17:41:42 crc kubenswrapper[5112]: if [ -e "${reg_dir_path}" ]; then Dec 08 17:41:42 crc kubenswrapper[5112]: cp -u $ca_file_path $reg_dir_path/ca.crt Dec 08 17:41:42 crc kubenswrapper[5112]: else Dec 08 17:41:42 crc kubenswrapper[5112]: mkdir $reg_dir_path Dec 08 17:41:42 crc kubenswrapper[5112]: cp $ca_file_path $reg_dir_path/ca.crt Dec 08 17:41:42 crc kubenswrapper[5112]: fi Dec 08 17:41:42 crc kubenswrapper[5112]: done Dec 08 17:41:42 crc kubenswrapper[5112]: for d in $(ls /etc/docker/certs.d); do Dec 08 17:41:42 crc kubenswrapper[5112]: echo $d Dec 08 17:41:42 crc kubenswrapper[5112]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Dec 08 17:41:42 crc kubenswrapper[5112]: reg_conf_path="/tmp/serviceca/${dp}" Dec 08 17:41:42 crc kubenswrapper[5112]: if [ ! 
-e "${reg_conf_path}" ]; then Dec 08 17:41:42 crc kubenswrapper[5112]: rm -rf /etc/docker/certs.d/$d Dec 08 17:41:42 crc kubenswrapper[5112]: fi Dec 08 17:41:42 crc kubenswrapper[5112]: done Dec 08 17:41:42 crc kubenswrapper[5112]: sleep 60 & wait ${!} Dec 08 17:41:42 crc kubenswrapper[5112]: done Dec 08 17:41:42 crc kubenswrapper[5112]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-88g7z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-4hrlr_openshift-image-registry(5c78b0cc-34ce-48fe-aeb3-f84b04fc6af6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 17:41:42 crc kubenswrapper[5112]: > logger="UnhandledError" Dec 08 17:41:42 crc kubenswrapper[5112]: E1208 17:41:42.690194 5112 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" podUID="0510de3f-316a-4902-a746-a746c3ce594c" Dec 08 17:41:42 crc kubenswrapper[5112]: E1208 17:41:42.690709 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-4hrlr" podUID="5c78b0cc-34ce-48fe-aeb3-f84b04fc6af6" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.690884 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9xjh5" event={"ID":"575dcc54-1cfa-45ab-8c22-087fcf27f142","Type":"ContainerStarted","Data":"22aa2759e49dc430126c5b8f12476a6b0d1c52c5c03307a193666683aa194c1d"} Dec 08 17:41:42 crc kubenswrapper[5112]: E1208 17:41:42.690923 5112 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-56lk7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-s6wzf_openshift-machine-config-operator(95e46da0-94bb-4d22-804b-b3018984cdac): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.692156 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" event={"ID":"472d4dbe-4674-43ba-98da-98502eccb960","Type":"ContainerStarted","Data":"4fe90487572f15dee0fd51ad86b86ff796accb27f36bbd9d0738df2a8cd05aed"} Dec 08 17:41:42 crc kubenswrapper[5112]: E1208 17:41:42.693844 5112 kuberuntime_manager.go:1358] "Unhandled Error" 
err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5c98z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-9xjh5_openshift-multus(575dcc54-1cfa-45ab-8c22-087fcf27f142): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 17:41:42 crc kubenswrapper[5112]: E1208 17:41:42.694043 5112 kuberuntime_manager.go:1358] "Unhandled Error" 
err=< Dec 08 17:41:42 crc kubenswrapper[5112]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Dec 08 17:41:42 crc kubenswrapper[5112]: set -euo pipefail Dec 08 17:41:42 crc kubenswrapper[5112]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Dec 08 17:41:42 crc kubenswrapper[5112]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Dec 08 17:41:42 crc kubenswrapper[5112]: # As the secret mount is optional we must wait for the files to be present. Dec 08 17:41:42 crc kubenswrapper[5112]: # The service is created in monitor.yaml and this is created in sdn.yaml. Dec 08 17:41:42 crc kubenswrapper[5112]: TS=$(date +%s) Dec 08 17:41:42 crc kubenswrapper[5112]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Dec 08 17:41:42 crc kubenswrapper[5112]: HAS_LOGGED_INFO=0 Dec 08 17:41:42 crc kubenswrapper[5112]: Dec 08 17:41:42 crc kubenswrapper[5112]: log_missing_certs(){ Dec 08 17:41:42 crc kubenswrapper[5112]: CUR_TS=$(date +%s) Dec 08 17:41:42 crc kubenswrapper[5112]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Dec 08 17:41:42 crc kubenswrapper[5112]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Dec 08 17:41:42 crc kubenswrapper[5112]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Dec 08 17:41:42 crc kubenswrapper[5112]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Dec 08 17:41:42 crc kubenswrapper[5112]: HAS_LOGGED_INFO=1 Dec 08 17:41:42 crc kubenswrapper[5112]: fi Dec 08 17:41:42 crc kubenswrapper[5112]: } Dec 08 17:41:42 crc kubenswrapper[5112]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Dec 08 17:41:42 crc kubenswrapper[5112]: log_missing_certs Dec 08 17:41:42 crc kubenswrapper[5112]: sleep 5 Dec 08 17:41:42 crc kubenswrapper[5112]: done Dec 08 17:41:42 crc kubenswrapper[5112]: Dec 08 17:41:42 crc kubenswrapper[5112]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Dec 08 17:41:42 crc kubenswrapper[5112]: exec /usr/bin/kube-rbac-proxy \ Dec 08 17:41:42 crc kubenswrapper[5112]: --logtostderr \ Dec 08 17:41:42 crc kubenswrapper[5112]: --secure-listen-address=:9108 \ Dec 08 17:41:42 crc kubenswrapper[5112]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Dec 08 17:41:42 crc kubenswrapper[5112]: --upstream=http://127.0.0.1:29108/ \ Dec 08 17:41:42 crc kubenswrapper[5112]: --tls-private-key-file=${TLS_PK} \ Dec 08 17:41:42 crc kubenswrapper[5112]: --tls-cert-file=${TLS_CERT} Dec 08 17:41:42 crc kubenswrapper[5112]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sv8p6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-b7fmf_openshift-ovn-kubernetes(472d4dbe-4674-43ba-98da-98502eccb960): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 17:41:42 crc kubenswrapper[5112]: > logger="UnhandledError" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.694405 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kvv4v" event={"ID":"288ee203-be3f-4176-90b2-7d95ee47aee8","Type":"ContainerStarted","Data":"527458de87f958b13e0e947296dd0cf46453e4a4709324d8a9c45e958d02266f"} Dec 08 17:41:42 crc kubenswrapper[5112]: E1208 17:41:42.695007 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-9xjh5" podUID="575dcc54-1cfa-45ab-8c22-087fcf27f142" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.695816 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-rsc28" 
event={"ID":"a63a9eaf-972d-4a8f-a9e5-f0f397bf8e9b","Type":"ContainerStarted","Data":"0a6053c5a05d0f57d5929f078e3081740ebe21236283621e542b38d891cd61ca"} Dec 08 17:41:42 crc kubenswrapper[5112]: E1208 17:41:42.697390 5112 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 17:41:42 crc kubenswrapper[5112]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Dec 08 17:41:42 crc kubenswrapper[5112]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Dec 08 17:41:42 crc kubenswrapper[5112]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gbcf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-kvv4v_openshift-multus(288ee203-be3f-4176-90b2-7d95ee47aee8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 17:41:42 crc kubenswrapper[5112]: > logger="UnhandledError" Dec 08 17:41:42 crc kubenswrapper[5112]: E1208 17:41:42.697521 5112 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 17:41:42 crc kubenswrapper[5112]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 08 17:41:42 crc kubenswrapper[5112]: if [[ -f "/env/_master" ]]; then Dec 08 17:41:42 crc kubenswrapper[5112]: set -o allexport Dec 08 17:41:42 crc 
kubenswrapper[5112]: source "/env/_master" Dec 08 17:41:42 crc kubenswrapper[5112]: set +o allexport Dec 08 17:41:42 crc kubenswrapper[5112]: fi Dec 08 17:41:42 crc kubenswrapper[5112]: Dec 08 17:41:42 crc kubenswrapper[5112]: ovn_v4_join_subnet_opt= Dec 08 17:41:42 crc kubenswrapper[5112]: if [[ "" != "" ]]; then Dec 08 17:41:42 crc kubenswrapper[5112]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Dec 08 17:41:42 crc kubenswrapper[5112]: fi Dec 08 17:41:42 crc kubenswrapper[5112]: ovn_v6_join_subnet_opt= Dec 08 17:41:42 crc kubenswrapper[5112]: if [[ "" != "" ]]; then Dec 08 17:41:42 crc kubenswrapper[5112]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Dec 08 17:41:42 crc kubenswrapper[5112]: fi Dec 08 17:41:42 crc kubenswrapper[5112]: Dec 08 17:41:42 crc kubenswrapper[5112]: ovn_v4_transit_switch_subnet_opt= Dec 08 17:41:42 crc kubenswrapper[5112]: if [[ "" != "" ]]; then Dec 08 17:41:42 crc kubenswrapper[5112]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Dec 08 17:41:42 crc kubenswrapper[5112]: fi Dec 08 17:41:42 crc kubenswrapper[5112]: ovn_v6_transit_switch_subnet_opt= Dec 08 17:41:42 crc kubenswrapper[5112]: if [[ "" != "" ]]; then Dec 08 17:41:42 crc kubenswrapper[5112]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Dec 08 17:41:42 crc kubenswrapper[5112]: fi Dec 08 17:41:42 crc kubenswrapper[5112]: Dec 08 17:41:42 crc kubenswrapper[5112]: dns_name_resolver_enabled_flag= Dec 08 17:41:42 crc kubenswrapper[5112]: if [[ "false" == "true" ]]; then Dec 08 17:41:42 crc kubenswrapper[5112]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Dec 08 17:41:42 crc kubenswrapper[5112]: fi Dec 08 17:41:42 crc kubenswrapper[5112]: Dec 08 17:41:42 crc kubenswrapper[5112]: persistent_ips_enabled_flag="--enable-persistent-ips" Dec 08 17:41:42 crc kubenswrapper[5112]: Dec 08 17:41:42 crc kubenswrapper[5112]: # This is needed so that converting clusters from GA to TP Dec 08 17:41:42 crc 
kubenswrapper[5112]: # will rollout control plane pods as well Dec 08 17:41:42 crc kubenswrapper[5112]: network_segmentation_enabled_flag= Dec 08 17:41:42 crc kubenswrapper[5112]: multi_network_enabled_flag= Dec 08 17:41:42 crc kubenswrapper[5112]: if [[ "true" == "true" ]]; then Dec 08 17:41:42 crc kubenswrapper[5112]: multi_network_enabled_flag="--enable-multi-network" Dec 08 17:41:42 crc kubenswrapper[5112]: fi Dec 08 17:41:42 crc kubenswrapper[5112]: if [[ "true" == "true" ]]; then Dec 08 17:41:42 crc kubenswrapper[5112]: if [[ "true" != "true" ]]; then Dec 08 17:41:42 crc kubenswrapper[5112]: multi_network_enabled_flag="--enable-multi-network" Dec 08 17:41:42 crc kubenswrapper[5112]: fi Dec 08 17:41:42 crc kubenswrapper[5112]: network_segmentation_enabled_flag="--enable-network-segmentation" Dec 08 17:41:42 crc kubenswrapper[5112]: fi Dec 08 17:41:42 crc kubenswrapper[5112]: Dec 08 17:41:42 crc kubenswrapper[5112]: route_advertisements_enable_flag= Dec 08 17:41:42 crc kubenswrapper[5112]: if [[ "false" == "true" ]]; then Dec 08 17:41:42 crc kubenswrapper[5112]: route_advertisements_enable_flag="--enable-route-advertisements" Dec 08 17:41:42 crc kubenswrapper[5112]: fi Dec 08 17:41:42 crc kubenswrapper[5112]: Dec 08 17:41:42 crc kubenswrapper[5112]: preconfigured_udn_addresses_enable_flag= Dec 08 17:41:42 crc kubenswrapper[5112]: if [[ "false" == "true" ]]; then Dec 08 17:41:42 crc kubenswrapper[5112]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Dec 08 17:41:42 crc kubenswrapper[5112]: fi Dec 08 17:41:42 crc kubenswrapper[5112]: Dec 08 17:41:42 crc kubenswrapper[5112]: # Enable multi-network policy if configured (control-plane always full mode) Dec 08 17:41:42 crc kubenswrapper[5112]: multi_network_policy_enabled_flag= Dec 08 17:41:42 crc kubenswrapper[5112]: if [[ "false" == "true" ]]; then Dec 08 17:41:42 crc kubenswrapper[5112]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Dec 08 17:41:42 crc 
kubenswrapper[5112]: fi Dec 08 17:41:42 crc kubenswrapper[5112]: Dec 08 17:41:42 crc kubenswrapper[5112]: # Enable admin network policy if configured (control-plane always full mode) Dec 08 17:41:42 crc kubenswrapper[5112]: admin_network_policy_enabled_flag= Dec 08 17:41:42 crc kubenswrapper[5112]: if [[ "true" == "true" ]]; then Dec 08 17:41:42 crc kubenswrapper[5112]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Dec 08 17:41:42 crc kubenswrapper[5112]: fi Dec 08 17:41:42 crc kubenswrapper[5112]: Dec 08 17:41:42 crc kubenswrapper[5112]: if [ "shared" == "shared" ]; then Dec 08 17:41:42 crc kubenswrapper[5112]: gateway_mode_flags="--gateway-mode shared" Dec 08 17:41:42 crc kubenswrapper[5112]: elif [ "shared" == "local" ]; then Dec 08 17:41:42 crc kubenswrapper[5112]: gateway_mode_flags="--gateway-mode local" Dec 08 17:41:42 crc kubenswrapper[5112]: else Dec 08 17:41:42 crc kubenswrapper[5112]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." 
Dec 08 17:41:42 crc kubenswrapper[5112]: exit 1 Dec 08 17:41:42 crc kubenswrapper[5112]: fi Dec 08 17:41:42 crc kubenswrapper[5112]: Dec 08 17:41:42 crc kubenswrapper[5112]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Dec 08 17:41:42 crc kubenswrapper[5112]: exec /usr/bin/ovnkube \ Dec 08 17:41:42 crc kubenswrapper[5112]: --enable-interconnect \ Dec 08 17:41:42 crc kubenswrapper[5112]: --init-cluster-manager "${K8S_NODE}" \ Dec 08 17:41:42 crc kubenswrapper[5112]: --config-file=/run/ovnkube-config/ovnkube.conf \ Dec 08 17:41:42 crc kubenswrapper[5112]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Dec 08 17:41:42 crc kubenswrapper[5112]: --metrics-bind-address "127.0.0.1:29108" \ Dec 08 17:41:42 crc kubenswrapper[5112]: --metrics-enable-pprof \ Dec 08 17:41:42 crc kubenswrapper[5112]: --metrics-enable-config-duration \ Dec 08 17:41:42 crc kubenswrapper[5112]: ${ovn_v4_join_subnet_opt} \ Dec 08 17:41:42 crc kubenswrapper[5112]: ${ovn_v6_join_subnet_opt} \ Dec 08 17:41:42 crc kubenswrapper[5112]: ${ovn_v4_transit_switch_subnet_opt} \ Dec 08 17:41:42 crc kubenswrapper[5112]: ${ovn_v6_transit_switch_subnet_opt} \ Dec 08 17:41:42 crc kubenswrapper[5112]: ${dns_name_resolver_enabled_flag} \ Dec 08 17:41:42 crc kubenswrapper[5112]: ${persistent_ips_enabled_flag} \ Dec 08 17:41:42 crc kubenswrapper[5112]: ${multi_network_enabled_flag} \ Dec 08 17:41:42 crc kubenswrapper[5112]: ${network_segmentation_enabled_flag} \ Dec 08 17:41:42 crc kubenswrapper[5112]: ${gateway_mode_flags} \ Dec 08 17:41:42 crc kubenswrapper[5112]: ${route_advertisements_enable_flag} \ Dec 08 17:41:42 crc kubenswrapper[5112]: ${preconfigured_udn_addresses_enable_flag} \ Dec 08 17:41:42 crc kubenswrapper[5112]: --enable-egress-ip=true \ Dec 08 17:41:42 crc kubenswrapper[5112]: --enable-egress-firewall=true \ Dec 08 17:41:42 crc kubenswrapper[5112]: --enable-egress-qos=true \ Dec 08 17:41:42 crc kubenswrapper[5112]: --enable-egress-service=true \ 
Dec 08 17:41:42 crc kubenswrapper[5112]: --enable-multicast \ Dec 08 17:41:42 crc kubenswrapper[5112]: --enable-multi-external-gateway=true \ Dec 08 17:41:42 crc kubenswrapper[5112]: ${multi_network_policy_enabled_flag} \ Dec 08 17:41:42 crc kubenswrapper[5112]: ${admin_network_policy_enabled_flag} Dec 08 17:41:42 crc kubenswrapper[5112]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sv8p6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ovnkube-control-plane-57b78d8988-b7fmf_openshift-ovn-kubernetes(472d4dbe-4674-43ba-98da-98502eccb960): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 17:41:42 crc kubenswrapper[5112]: > logger="UnhandledError" Dec 08 17:41:42 crc kubenswrapper[5112]: E1208 17:41:42.698542 5112 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 17:41:42 crc kubenswrapper[5112]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Dec 08 17:41:42 crc kubenswrapper[5112]: set -uo pipefail Dec 08 17:41:42 crc kubenswrapper[5112]: Dec 08 17:41:42 crc kubenswrapper[5112]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Dec 08 17:41:42 crc kubenswrapper[5112]: Dec 08 17:41:42 crc kubenswrapper[5112]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Dec 08 17:41:42 crc kubenswrapper[5112]: HOSTS_FILE="/etc/hosts" Dec 08 17:41:42 crc kubenswrapper[5112]: TEMP_FILE="/tmp/hosts.tmp" Dec 08 17:41:42 crc kubenswrapper[5112]: Dec 08 17:41:42 crc kubenswrapper[5112]: IFS=', ' read -r -a services <<< "${SERVICES}" Dec 08 17:41:42 crc kubenswrapper[5112]: Dec 08 17:41:42 crc kubenswrapper[5112]: # Make a temporary file with the old hosts file's attributes. Dec 08 17:41:42 crc kubenswrapper[5112]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Dec 08 17:41:42 crc kubenswrapper[5112]: echo "Failed to preserve hosts file. Exiting." Dec 08 17:41:42 crc kubenswrapper[5112]: exit 1 Dec 08 17:41:42 crc kubenswrapper[5112]: fi Dec 08 17:41:42 crc kubenswrapper[5112]: Dec 08 17:41:42 crc kubenswrapper[5112]: while true; do Dec 08 17:41:42 crc kubenswrapper[5112]: declare -A svc_ips Dec 08 17:41:42 crc kubenswrapper[5112]: for svc in "${services[@]}"; do Dec 08 17:41:42 crc kubenswrapper[5112]: # Fetch service IP from cluster dns if present. 
We make several tries Dec 08 17:41:42 crc kubenswrapper[5112]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Dec 08 17:41:42 crc kubenswrapper[5112]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Dec 08 17:41:42 crc kubenswrapper[5112]: # support UDP loadbalancers and require reaching DNS through TCP. Dec 08 17:41:42 crc kubenswrapper[5112]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 17:41:42 crc kubenswrapper[5112]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 17:41:42 crc kubenswrapper[5112]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 17:41:42 crc kubenswrapper[5112]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Dec 08 17:41:42 crc kubenswrapper[5112]: for i in ${!cmds[*]} Dec 08 17:41:42 crc kubenswrapper[5112]: do Dec 08 17:41:42 crc kubenswrapper[5112]: ips=($(eval "${cmds[i]}")) Dec 08 17:41:42 crc kubenswrapper[5112]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Dec 08 17:41:42 crc kubenswrapper[5112]: svc_ips["${svc}"]="${ips[@]}" Dec 08 17:41:42 crc kubenswrapper[5112]: break Dec 08 17:41:42 crc kubenswrapper[5112]: fi Dec 08 17:41:42 crc kubenswrapper[5112]: done Dec 08 17:41:42 crc kubenswrapper[5112]: done Dec 08 17:41:42 crc kubenswrapper[5112]: Dec 08 17:41:42 crc kubenswrapper[5112]: # Update /etc/hosts only if we get valid service IPs Dec 08 17:41:42 crc kubenswrapper[5112]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Dec 08 17:41:42 crc kubenswrapper[5112]: # Stale entries could exist in /etc/hosts if the service is deleted Dec 08 17:41:42 crc kubenswrapper[5112]: if [[ -n "${svc_ips[*]-}" ]]; then Dec 08 17:41:42 crc kubenswrapper[5112]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Dec 08 17:41:42 crc kubenswrapper[5112]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Dec 08 17:41:42 crc kubenswrapper[5112]: # Only continue rebuilding the hosts entries if its original content is preserved Dec 08 17:41:42 crc kubenswrapper[5112]: sleep 60 & wait Dec 08 17:41:42 crc kubenswrapper[5112]: continue Dec 08 17:41:42 crc kubenswrapper[5112]: fi Dec 08 17:41:42 crc kubenswrapper[5112]: Dec 08 17:41:42 crc kubenswrapper[5112]: # Append resolver entries for services Dec 08 17:41:42 crc kubenswrapper[5112]: rc=0 Dec 08 17:41:42 crc kubenswrapper[5112]: for svc in "${!svc_ips[@]}"; do Dec 08 17:41:42 crc kubenswrapper[5112]: for ip in ${svc_ips[${svc}]}; do Dec 08 17:41:42 crc kubenswrapper[5112]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Dec 08 17:41:42 crc kubenswrapper[5112]: done Dec 08 17:41:42 crc kubenswrapper[5112]: done Dec 08 17:41:42 crc kubenswrapper[5112]: if [[ $rc -ne 0 ]]; then Dec 08 17:41:42 crc kubenswrapper[5112]: sleep 60 & wait Dec 08 17:41:42 crc kubenswrapper[5112]: continue Dec 08 17:41:42 crc kubenswrapper[5112]: fi Dec 08 17:41:42 crc kubenswrapper[5112]: Dec 08 17:41:42 crc kubenswrapper[5112]: Dec 08 17:41:42 crc kubenswrapper[5112]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Dec 08 17:41:42 crc kubenswrapper[5112]: # Replace /etc/hosts with our modified version if needed Dec 08 17:41:42 crc kubenswrapper[5112]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Dec 08 17:41:42 crc kubenswrapper[5112]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Dec 08 17:41:42 crc kubenswrapper[5112]: fi Dec 08 17:41:42 crc kubenswrapper[5112]: sleep 60 & wait Dec 08 17:41:42 crc kubenswrapper[5112]: unset svc_ips Dec 08 17:41:42 crc kubenswrapper[5112]: done Dec 08 17:41:42 crc kubenswrapper[5112]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4pm48,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-rsc28_openshift-dns(a63a9eaf-972d-4a8f-a9e5-f0f397bf8e9b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 17:41:42 crc kubenswrapper[5112]: > logger="UnhandledError" Dec 08 17:41:42 crc kubenswrapper[5112]: E1208 17:41:42.698547 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet 
been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-kvv4v" podUID="288ee203-be3f-4176-90b2-7d95ee47aee8" Dec 08 17:41:42 crc kubenswrapper[5112]: E1208 17:41:42.698630 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" podUID="472d4dbe-4674-43ba-98da-98502eccb960" Dec 08 17:41:42 crc kubenswrapper[5112]: E1208 17:41:42.698713 5112 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 17:41:42 crc kubenswrapper[5112]: E1208 17:41:42.698439 5112 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml 
--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-56lk7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-s6wzf_openshift-machine-config-operator(95e46da0-94bb-4d22-804b-b3018984cdac): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 17:41:42 crc kubenswrapper[5112]: E1208 17:41:42.699979 5112 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" podUID="95e46da0-94bb-4d22-804b-b3018984cdac" Dec 08 17:41:42 crc kubenswrapper[5112]: E1208 17:41:42.700018 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-rsc28" podUID="a63a9eaf-972d-4a8f-a9e5-f0f397bf8e9b" Dec 08 17:41:42 crc kubenswrapper[5112]: E1208 17:41:42.700069 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.724483 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95e46da0-94bb-4d22-804b-b3018984cdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56lk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56lk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s6wzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.736174 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.736253 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.736267 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.736290 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.736352 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:42Z","lastTransitionTime":"2025-12-08T17:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.776308 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ad0b160-7036-4cfb-9738-1e0e8ebe1e5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://953eaf00aeddf0f031eb9db85dda27332777dd31ac6746dfdedcc13ed20cb02c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://a7b9e7098ad13452cf8f0aa13c84480bf630b57c0296cec645e8fd4f030b13fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7ace3eb0fbb6c37ad43df89af7c25f6a0bda9c7e079a6bfb7683984630e7cd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a
6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c9534bda3d71b68f6920f0c8a5dd54d3d31bac188d8fb76a1d29a3f5f0b621a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://16bdf27bbd7b756aec823f0df94a6a72c5ad978e71a5e24824de2ab45e54c0c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://60a4953961d430db10a6fe21df995549ba6cbe84be5c4e7bfc3788c53c152bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60a4953961d430db10a6fe21df995549ba6cbe84be5c4e7bfc3788c53c152bd6\\\",\\\"exitCode\\\":0
,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://276a86f68c5958c297d0c493b713318ea5ebe302666d6478a229eae2ffed90b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://276a86f68c5958c297d0c493b713318ea5ebe302666d6478a229eae2ffed90b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8759198baf7f22676f14d8513f3af488df4c58c4ed65db59aac2bff888a6d878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\
",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8759198baf7f22676f14d8513f3af488df4c58c4ed65db59aac2bff888a6d878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.807427 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.838194 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.838452 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.838465 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.838483 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.838497 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:42Z","lastTransitionTime":"2025-12-08T17:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.845994 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"472d4dbe-4674-43ba-98da-98502eccb960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been 
read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv8p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv8p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-b7fmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.884523 5112 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-4hrlr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c78b0cc-34ce-48fe-aeb3-f84b04fc6af6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4hrlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.925395 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.940789 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.940848 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.940861 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.940879 5112 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeNotReady" Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.940892 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:42Z","lastTransitionTime":"2025-12-08T17:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:42 crc kubenswrapper[5112]: I1208 17:41:42.965149 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-rsc28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a63a9eaf-972d-4a8f-a9e5-f0f397bf8e9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pm48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rsc28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.014433 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0510de3f-316a-4902-a746-a746c3ce594c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ng27z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.030368 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:41:43 crc kubenswrapper[5112]: E1208 17:41:43.030579 5112 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 17:41:43 crc kubenswrapper[5112]: E1208 17:41:43.030624 5112 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 17:41:43 crc kubenswrapper[5112]: E1208 17:41:43.030639 5112 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.030642 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:41:43 crc kubenswrapper[5112]: E1208 17:41:43.030707 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 17:41:45.030686631 +0000 UTC m=+82.040235332 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.030784 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.030951 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:41:43 crc kubenswrapper[5112]: E1208 17:41:43.030785 5112 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 
08 17:41:43 crc kubenswrapper[5112]: E1208 17:41:43.031206 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:41:45.031193124 +0000 UTC m=+82.040741825 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 17:41:43 crc kubenswrapper[5112]: E1208 17:41:43.030900 5112 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 17:41:43 crc kubenswrapper[5112]: E1208 17:41:43.031040 5112 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 17:41:43 crc kubenswrapper[5112]: E1208 17:41:43.031345 5112 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 17:41:43 crc kubenswrapper[5112]: E1208 17:41:43.031360 5112 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:41:43 crc kubenswrapper[5112]: E1208 17:41:43.031360 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert 
podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:41:45.031323588 +0000 UTC m=+82.040872329 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 17:41:43 crc kubenswrapper[5112]: E1208 17:41:43.031394 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 17:41:45.031385579 +0000 UTC m=+82.040934400 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.043326 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.043405 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.043419 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.043439 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeNotReady" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.043451 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:43Z","lastTransitionTime":"2025-12-08T17:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.049030 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a54de98a-e0fb-42e6-9458-35bf008a1af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8e1ad1521591e581cd357d3b49dde54e9a2c1a793edc8dced64f3acbe9f7f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://69cc882495a4c55c83d8793d16e873cde0e5c81bbf76ed52eec3ed59b99b937f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5e0e157a3ba
41263bd7a39a6c64f50ccf232bc55ef3df90ffbbd314418ce69bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0018c38fa9031e6a1ae5e3b128294b6e93ace41c3e09982a94e2af830b208186\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0018c38fa9031e6a1ae5e3b128294b6e93ace41c3e09982a94e2af830b208186\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"
2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.092980 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d35301b2-73ca-44c7-bb4c-e7e68d41ac54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://635a0eb4e03cd1986945a33c0535e16c4f6d5cd1fbcbdb9e1ecf08f5ef7f71a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://26119bf950efdaf6bdf29fdd288dcde55f46e233074dadd05d09ae9662a98ac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62545f7eda9644c677205810fa88d197c6edaf171a4f7972db714c29d0e0c0f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7a8bdf5c7a
f29378b589fe72e7884a1b14b3ee02680ca8842031e246f786fa59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a8bdf5c7af29378b589fe72e7884a1b14b3ee02680ca8842031e246f786fa59\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T17:41:31Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW1208 17:41:31.167389 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 17:41:31.167693 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 17:41:31.168628 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3936714883/tls.crt::/tmp/serving-cert-3936714883/tls.key\\\\\\\"\\\\nI1208 17:41:31.681853 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 17:41:31.683635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 17:41:31.683651 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 17:41:31.683675 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 17:41:31.683681 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 17:41:31.690777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 17:41:31.690804 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 17:41:31.690811 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 17:41:31.690838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 17:41:31.690843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 17:41:31.690848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 17:41:31.690851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 17:41:31.690855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 17:41:31.693539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T17:41:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://43d313abdeed2aafe14cb80ae38e97b5bbdc62197de02664fbc6c17597822faa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3fbd9e0043ef37c10fbc9823215d21c9b1eee4025f73932cd6cbf3055d5f2397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fbd9e0043ef37c10fbc9823215d21c9b1eee4025f73932cd6cbf3055d5f2397\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.125894 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.132479 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/3c4fb553-8514-4194-847c-96d40f8b41e3-metrics-certs\") pod \"network-metrics-daemon-7jq8h\" (UID: \"3c4fb553-8514-4194-847c-96d40f8b41e3\") " pod="openshift-multus/network-metrics-daemon-7jq8h" Dec 08 17:41:43 crc kubenswrapper[5112]: E1208 17:41:43.132676 5112 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 17:41:43 crc kubenswrapper[5112]: E1208 17:41:43.132789 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c4fb553-8514-4194-847c-96d40f8b41e3-metrics-certs podName:3c4fb553-8514-4194-847c-96d40f8b41e3 nodeName:}" failed. No retries permitted until 2025-12-08 17:41:45.132768337 +0000 UTC m=+82.142317028 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3c4fb553-8514-4194-847c-96d40f8b41e3-metrics-certs") pod "network-metrics-daemon-7jq8h" (UID: "3c4fb553-8514-4194-847c-96d40f8b41e3") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.145372 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.145427 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.145440 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.145458 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.145470 5112 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:43Z","lastTransitionTime":"2025-12-08T17:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.170036 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-kvv4v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"288ee203-be3f-4176-90b2-7d95ee47aee8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbcf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kvv4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.204363 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jq8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c4fb553-8514-4194-847c-96d40f8b41e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mv6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mv6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jq8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.233802 5112 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:41:43 crc kubenswrapper[5112]: E1208 17:41:43.234037 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:41:45.234020291 +0000 UTC m=+82.243568992 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.244896 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95e46da0-94bb-4d22-804b-b3018984cdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56lk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56lk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s6wzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.247658 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.247713 5112 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.247729 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.247754 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.247769 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:43Z","lastTransitionTime":"2025-12-08T17:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.296486 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ad0b160-7036-4cfb-9738-1e0e8ebe1e5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://953eaf00aeddf0f031eb9db85dda27332777dd31ac6746dfdedcc13ed20cb02c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://a7b9e7098ad13452cf8f0aa13c84480bf630b57c0296cec645e8fd4f030b13fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7ace3eb0fbb6c37ad43df89af7c25f6a0bda9c7e079a6bfb7683984630e7cd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c9534bda3d71b68f6920f0c8a5dd54d3d31bac188d8fb76a1d29a3f5f0b621a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://16bdf27bbd7b756aec823f0df94a6a72c5ad978e71a5e24824de2ab45e54c0c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://60a4953961d430db10a6fe21df995549ba6cbe84be5c4e7bfc3788c53c152bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60a4953961d430db10a6fe21df995549ba6cbe84be5c4e7bfc3788c53c152bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://276a86f68c5958c297d0c493b713318ea5ebe302666d6478a229eae2ffed90b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://276a86f68c5958c297d0c493b713318ea5ebe302666d6478a229eae2ffed90b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8759198baf7f22676f14d8513f3af488df4c58c4ed65db59aac2bff888a6d878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://8759198baf7f22676f14d8513f3af488df4c58c4ed65db59aac2bff888a6d878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.316230 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.316290 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.316242 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:41:43 crc kubenswrapper[5112]: E1208 17:41:43.316377 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.316299 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jq8h" Dec 08 17:41:43 crc kubenswrapper[5112]: E1208 17:41:43.316459 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 17:41:43 crc kubenswrapper[5112]: E1208 17:41:43.316556 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7jq8h" podUID="3c4fb553-8514-4194-847c-96d40f8b41e3" Dec 08 17:41:43 crc kubenswrapper[5112]: E1208 17:41:43.316666 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.320797 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.321518 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.323248 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.324530 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.326239 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a7a500b-9152-4fa4-a5ef-7a037610043a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ea6605166b2660aac60c892c3aa4300f70f3c325fa54b0c5cebab4c59e7e44d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1f8f801ff9a8f57c7935a0dc59f8db7ead28e39b9b0235c5fbac0238e1ae4d1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://407e6dc04957ad635291d63043e12fc7751c6de36462219e6f8e991af59b523c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f47b69e17f8b8b7e2c46f449515d3eb8408a6ef649bf396eef3abeac2d4b2483\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.326572 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.328281 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.329537 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.330872 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.331499 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.332821 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.333764 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.335148 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.335790 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.337327 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.337773 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.338424 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.339480 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.340598 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.341833 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.342688 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.343568 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.345527 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.346579 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.348185 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.349393 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.349460 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:43 crc 
kubenswrapper[5112]: I1208 17:41:43.349472 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.349488 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.349539 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.349499 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:43Z","lastTransitionTime":"2025-12-08T17:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.350535 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.351991 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.352690 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.354656 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.355541 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.357021 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.358451 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.359765 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" 
path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: E1208 17:41:43.359709 5112 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bfc9941-22f6-447c-a313-68da2bceb39a\\\",\\\"systemUUID\\\":\\\"b5fe6617-167d-4502-9bb8-e694c6fec87c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.361516 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.362450 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.363647 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.363749 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.363783 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.363794 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.363810 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.363820 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:43Z","lastTransitionTime":"2025-12-08T17:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.364622 5112 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.364750 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.368890 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.372650 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: E1208 17:41:43.373640 5112 kubelet_node_status.go:597] "Error updating node 
status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bfc9941-22f6-447c-a313-68da2bceb39a\\\",\\\"systemUUID\\\":\\\"b5fe6617-167d-4502-9bb8-e694c6fec87c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.374470 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.375603 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.377052 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.377086 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.377111 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.377127 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.377139 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:43Z","lastTransitionTime":"2025-12-08T17:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.377599 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.378262 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.380625 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.381848 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.385615 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.387183 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: E1208 17:41:43.387328 5112 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bfc9941-22f6-447c-a313-68da2bceb39a\\\",\\\"systemUUID\\\":\\\"b5fe6617-167d-4502-9bb8-e694c6fec87c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.389515 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.390602 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.391907 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.391954 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.391967 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.391976 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.391983 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.392113 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:43Z","lastTransitionTime":"2025-12-08T17:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.392898 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.393730 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.395559 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.396895 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.400054 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.400893 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.402441 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: E1208 17:41:43.403783 5112 
kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bfc9941-22f6-447c-a313-68da2bceb39a\\\",\\\"systemUUID\\\":\\\"b5fe6617-167d-4502-9bb8-e694c6fec87c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.405074 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.405442 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.407746 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.407787 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.407797 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.407811 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.407821 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:43Z","lastTransitionTime":"2025-12-08T17:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:43 crc kubenswrapper[5112]: E1208 17:41:43.418530 5112 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bfc9941-22f6-447c-a313-68da2bceb39a\\\",\\\"systemUUID\\\":\\\"b5fe6617-167d-4502-9bb8-e694c6fec87c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:43 crc kubenswrapper[5112]: E1208 17:41:43.418655 5112 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.419757 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.419788 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.419798 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.419816 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.419829 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:43Z","lastTransitionTime":"2025-12-08T17:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.445723 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.488532 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9xjh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"575dcc54-1cfa-45ab-8c22-087fcf27f142\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9xjh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.522268 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.522306 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.522315 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:43 
crc kubenswrapper[5112]: I1208 17:41:43.522330 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.522342 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:43Z","lastTransitionTime":"2025-12-08T17:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.525279 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"367e7840-8095-41c1-93ec-9c02ff4d243d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://63bd2b5515bea7e14b54005f1477f959aac15ff6b2771db37fc28e46eea6be70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://86e97499087a3161ebf425803d6ac7e66513b7c73c759c730b8a17858f4e1b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e97499087a3161ebf425803d6ac7e66513b7c73c759c730b8a17858f4e1b82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:2
4Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.567597 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"367e7840-8095-41c1-93ec-9c02ff4d243d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://63bd2b5515bea7e14b54005f1477f959aac15ff6b2771db37fc28e46eea6be70\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://86e97499087a3161ebf425803d6ac7e66513b7c73c759c730b8a17858f4e1b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e97499087a3161ebf425803d6ac7e66513b7c73c759c730b8a17858f4e1b82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\
\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.611680 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.624129 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.624181 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.624198 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 
17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.624220 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.624237 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:43Z","lastTransitionTime":"2025-12-08T17:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.645200 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"472d4dbe-4674-43ba-98da-98502eccb960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv8p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv8p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-b7fmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.687500 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4hrlr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c78b0cc-34ce-48fe-aeb3-f84b04fc6af6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4hrlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.726520 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.726582 5112 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.726665 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.726691 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.726706 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:43Z","lastTransitionTime":"2025-12-08T17:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.730376 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.764011 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-rsc28" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a63a9eaf-972d-4a8f-a9e5-f0f397bf8e9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pm48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rsc28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.810198 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0510de3f-316a-4902-a746-a746c3ce594c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\
\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"
name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ng27z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.828369 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.828436 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.828455 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.828478 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.828497 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:43Z","lastTransitionTime":"2025-12-08T17:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.850980 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a54de98a-e0fb-42e6-9458-35bf008a1af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8e1ad1521591e581cd357d3b49dde54e9a2c1a793edc8dced64f3acbe9f7f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://69cc882495a4c55c83d8793d16e873cde0e5c81bbf76ed52eec3ed59b99b937f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5e0e157a3ba41263bd7a39a6c64f50ccf232bc55ef3df90ffbbd314418ce69bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\
\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0018c38fa9031e6a1ae5e3b128294b6e93ace41c3e09982a94e2af830b208186\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0018c38fa9031e6a1ae5e3b128294b6e93ace41c3e09982a94e2af830b208186\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.892169 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d35301b2-73ca-44c7-bb4c-e7e68d41ac54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://635a0eb4e03cd1986945a33c0535e16c4f6d5cd1fbcbdb9e1ecf08f5ef7f71a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://26119bf950efdaf6bdf29fdd288dcde55f46e233074dadd05d09ae9662a98ac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62545f7eda9644c677205810fa88d197c6edaf171a4f7972db714c29d0e0c0f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7a8bdf5c7a
f29378b589fe72e7884a1b14b3ee02680ca8842031e246f786fa59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a8bdf5c7af29378b589fe72e7884a1b14b3ee02680ca8842031e246f786fa59\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T17:41:31Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW1208 17:41:31.167389 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 17:41:31.167693 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 17:41:31.168628 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3936714883/tls.crt::/tmp/serving-cert-3936714883/tls.key\\\\\\\"\\\\nI1208 17:41:31.681853 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 17:41:31.683635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 17:41:31.683651 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 17:41:31.683675 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 17:41:31.683681 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 17:41:31.690777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 17:41:31.690804 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 17:41:31.690811 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 17:41:31.690838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 17:41:31.690843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 17:41:31.690848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 17:41:31.690851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 17:41:31.690855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 17:41:31.693539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T17:41:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://43d313abdeed2aafe14cb80ae38e97b5bbdc62197de02664fbc6c17597822faa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3fbd9e0043ef37c10fbc9823215d21c9b1eee4025f73932cd6cbf3055d5f2397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fbd9e0043ef37c10fbc9823215d21c9b1eee4025f73932cd6cbf3055d5f2397\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.927115 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.930634 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:43 crc 
kubenswrapper[5112]: I1208 17:41:43.930675 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.930693 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.930716 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.930733 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:43Z","lastTransitionTime":"2025-12-08T17:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:43 crc kubenswrapper[5112]: I1208 17:41:43.966023 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-kvv4v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"288ee203-be3f-4176-90b2-7d95ee47aee8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbcf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kvv4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.003501 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jq8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c4fb553-8514-4194-847c-96d40f8b41e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mv6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mv6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jq8h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.033069 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.033122 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.033136 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.033151 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.033161 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:44Z","lastTransitionTime":"2025-12-08T17:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.043598 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95e46da0-94bb-4d22-804b-b3018984cdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56lk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56lk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s6wzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.094080 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ad0b160-7036-4cfb-9738-1e0e8ebe1e5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://953eaf00aeddf0f031eb9db85dda27332777dd31ac6746dfdedcc13ed20cb02c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://a7b9e7098ad13452cf8f0aa13c84480bf630b57c0296cec645e8fd4f030b13fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7ace3eb0fbb6c37ad43df89af7c25f6a0bda9c7e079a6bfb7683984630e7cd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c9534bda3d71b68f6920f0c8a5dd54d3d31bac188d8fb76a1d29a3f5f0b621a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://16bdf27bbd7b756aec823f0df94a6a72c5ad978e71a5e24824de2ab45e54c0c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://60a4953961d430db10a6fe21df995549ba6cbe84be5c4e7bfc3788c53c152bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60a4953961d430db10a6fe21df995549ba6cbe84be5c4e7bfc3788c53c152bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://276a86f68c5958c297d0c493b713318ea5ebe302666d6478a229eae2ffed90b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://276a86f68c5958c297d0c493b713318ea5ebe302666d6478a229eae2ffed90b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8759198baf7f22676f14d8513f3af488df4c58c4ed65db59aac2bff888a6d878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://8759198baf7f22676f14d8513f3af488df4c58c4ed65db59aac2bff888a6d878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.125726 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a7a500b-9152-4fa4-a5ef-7a037610043a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ea6605166b2660aac60c892c3aa4300f70f3c325fa54b0c5cebab4c59e7e44d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1f8f801ff9a8f57c7935a0dc59f8db7ead28e39b9b0235c5fbac0238e1ae4d1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://407e6dc04957ad635291d63043e12fc7751c6de36462219e6f8e991af59b523c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f47b69e17f8b8b7e2c46f449515d3eb8408a6ef649bf396eef3abeac2d4b2483\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.135419 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.135470 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.135483 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.135502 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.135516 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:44Z","lastTransitionTime":"2025-12-08T17:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.165638 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.206422 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.237071 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.237135 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.237145 5112 kubelet_node_status.go:736] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.237160 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.237171 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:44Z","lastTransitionTime":"2025-12-08T17:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.244618 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.285402 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9xjh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"575dcc54-1cfa-45ab-8c22-087fcf27f142\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9xjh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.338777 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.338835 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.338852 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:44 
crc kubenswrapper[5112]: I1208 17:41:44.338871 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.338892 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:44Z","lastTransitionTime":"2025-12-08T17:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.440699 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.440765 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.440775 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.440788 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.440797 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:44Z","lastTransitionTime":"2025-12-08T17:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.543685 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.543727 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.543738 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.543753 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.543763 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:44Z","lastTransitionTime":"2025-12-08T17:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.645918 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.645952 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.645962 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.645975 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.645984 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:44Z","lastTransitionTime":"2025-12-08T17:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.748358 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.748429 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.748455 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.748477 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.748494 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:44Z","lastTransitionTime":"2025-12-08T17:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.850465 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.850501 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.850509 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.850521 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.850530 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:44Z","lastTransitionTime":"2025-12-08T17:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.952068 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.952339 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.952415 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.952493 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:44 crc kubenswrapper[5112]: I1208 17:41:44.952578 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:44Z","lastTransitionTime":"2025-12-08T17:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.051921 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.051980 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.052007 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.052026 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:41:45 crc kubenswrapper[5112]: E1208 17:41:45.052148 5112 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 17:41:45 crc kubenswrapper[5112]: E1208 17:41:45.052201 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:41:49.0521874 +0000 UTC m=+86.061736101 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 17:41:45 crc kubenswrapper[5112]: E1208 17:41:45.052757 5112 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 17:41:45 crc kubenswrapper[5112]: E1208 17:41:45.052777 5112 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 17:41:45 crc kubenswrapper[5112]: E1208 17:41:45.052787 5112 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:41:45 crc kubenswrapper[5112]: E1208 17:41:45.052813 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. 
No retries permitted until 2025-12-08 17:41:49.052805517 +0000 UTC m=+86.062354218 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:41:45 crc kubenswrapper[5112]: E1208 17:41:45.052906 5112 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 17:41:45 crc kubenswrapper[5112]: E1208 17:41:45.052961 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:41:49.052947311 +0000 UTC m=+86.062496032 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 17:41:45 crc kubenswrapper[5112]: E1208 17:41:45.053108 5112 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 17:41:45 crc kubenswrapper[5112]: E1208 17:41:45.053207 5112 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 17:41:45 crc kubenswrapper[5112]: E1208 17:41:45.053286 5112 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:41:45 crc kubenswrapper[5112]: E1208 17:41:45.053408 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 17:41:49.053386602 +0000 UTC m=+86.062935303 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.054259 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.054293 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.054304 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.054319 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.054331 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:45Z","lastTransitionTime":"2025-12-08T17:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.153170 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3c4fb553-8514-4194-847c-96d40f8b41e3-metrics-certs\") pod \"network-metrics-daemon-7jq8h\" (UID: \"3c4fb553-8514-4194-847c-96d40f8b41e3\") " pod="openshift-multus/network-metrics-daemon-7jq8h" Dec 08 17:41:45 crc kubenswrapper[5112]: E1208 17:41:45.153341 5112 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 17:41:45 crc kubenswrapper[5112]: E1208 17:41:45.153440 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c4fb553-8514-4194-847c-96d40f8b41e3-metrics-certs podName:3c4fb553-8514-4194-847c-96d40f8b41e3 nodeName:}" failed. No retries permitted until 2025-12-08 17:41:49.153418904 +0000 UTC m=+86.162967655 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3c4fb553-8514-4194-847c-96d40f8b41e3-metrics-certs") pod "network-metrics-daemon-7jq8h" (UID: "3c4fb553-8514-4194-847c-96d40f8b41e3") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.156291 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.156330 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.156341 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.156391 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.156405 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:45Z","lastTransitionTime":"2025-12-08T17:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.253830 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:41:45 crc kubenswrapper[5112]: E1208 17:41:45.253961 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:41:49.253938199 +0000 UTC m=+86.263486910 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.258282 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.258330 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.258343 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.258358 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:45 crc kubenswrapper[5112]: 
I1208 17:41:45.258368 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:45Z","lastTransitionTime":"2025-12-08T17:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.315645 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.315902 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:41:45 crc kubenswrapper[5112]: E1208 17:41:45.316048 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.315789 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jq8h" Dec 08 17:41:45 crc kubenswrapper[5112]: E1208 17:41:45.316176 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 17:41:45 crc kubenswrapper[5112]: E1208 17:41:45.316273 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jq8h" podUID="3c4fb553-8514-4194-847c-96d40f8b41e3" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.316298 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:41:45 crc kubenswrapper[5112]: E1208 17:41:45.316525 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.360625 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.360701 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.360716 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.360732 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.360744 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:45Z","lastTransitionTime":"2025-12-08T17:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.463701 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.463780 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.463805 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.463834 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.463859 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:45Z","lastTransitionTime":"2025-12-08T17:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.566170 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.566426 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.566514 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.566590 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.566669 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:45Z","lastTransitionTime":"2025-12-08T17:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.668963 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.669019 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.669030 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.669044 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.669053 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:45Z","lastTransitionTime":"2025-12-08T17:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.772260 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.772308 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.772317 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.772330 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.772339 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:45Z","lastTransitionTime":"2025-12-08T17:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.873894 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.873940 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.873953 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.873969 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.873979 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:45Z","lastTransitionTime":"2025-12-08T17:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.975943 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.975996 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.976014 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.976037 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:45 crc kubenswrapper[5112]: I1208 17:41:45.976056 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:45Z","lastTransitionTime":"2025-12-08T17:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.077950 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.078019 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.078039 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.078064 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.078125 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:46Z","lastTransitionTime":"2025-12-08T17:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.180427 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.180487 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.180505 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.180528 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.180548 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:46Z","lastTransitionTime":"2025-12-08T17:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.282628 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.282676 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.282687 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.282704 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.282714 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:46Z","lastTransitionTime":"2025-12-08T17:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.384949 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.385013 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.385033 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.385058 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.385120 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:46Z","lastTransitionTime":"2025-12-08T17:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.487386 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.487444 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.487455 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.487471 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.487486 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:46Z","lastTransitionTime":"2025-12-08T17:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.589742 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.589822 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.589849 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.589879 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.589918 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:46Z","lastTransitionTime":"2025-12-08T17:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.692067 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.692165 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.692182 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.692209 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.692237 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:46Z","lastTransitionTime":"2025-12-08T17:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.794381 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.794430 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.794439 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.794454 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.794470 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:46Z","lastTransitionTime":"2025-12-08T17:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.897539 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.897591 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.897602 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.897618 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:46 crc kubenswrapper[5112]: I1208 17:41:46.897631 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:46Z","lastTransitionTime":"2025-12-08T17:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.000230 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.000275 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.000285 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.000299 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.000311 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:47Z","lastTransitionTime":"2025-12-08T17:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.102117 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.102173 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.102191 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.102208 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.102220 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:47Z","lastTransitionTime":"2025-12-08T17:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.204505 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.204564 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.204582 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.204603 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.204619 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:47Z","lastTransitionTime":"2025-12-08T17:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.307182 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.307282 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.307309 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.307371 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.307391 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:47Z","lastTransitionTime":"2025-12-08T17:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.316052 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.316161 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 17:41:47 crc kubenswrapper[5112]: E1208 17:41:47.316284 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 08 17:41:47 crc kubenswrapper[5112]: E1208 17:41:47.316360 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.316470 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jq8h"
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.316516 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 17:41:47 crc kubenswrapper[5112]: E1208 17:41:47.316581 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 08 17:41:47 crc kubenswrapper[5112]: E1208 17:41:47.316654 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jq8h" podUID="3c4fb553-8514-4194-847c-96d40f8b41e3"
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.410156 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.410250 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.410266 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.410357 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.410372 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:47Z","lastTransitionTime":"2025-12-08T17:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.512816 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.512864 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.512877 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.512893 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.512905 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:47Z","lastTransitionTime":"2025-12-08T17:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.615876 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.615935 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.615946 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.615964 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.615973 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:47Z","lastTransitionTime":"2025-12-08T17:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.717461 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.717532 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.717559 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.717589 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.717620 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:47Z","lastTransitionTime":"2025-12-08T17:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.820140 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.820192 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.820205 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.820222 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.820234 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:47Z","lastTransitionTime":"2025-12-08T17:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.922530 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.922585 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.922598 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.922613 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:47 crc kubenswrapper[5112]: I1208 17:41:47.922623 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:47Z","lastTransitionTime":"2025-12-08T17:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.024579 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.024629 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.024638 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.024652 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.024661 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:48Z","lastTransitionTime":"2025-12-08T17:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.126863 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.126993 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.127014 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.127037 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.127056 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:48Z","lastTransitionTime":"2025-12-08T17:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.228747 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.228845 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.228883 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.228914 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.228935 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:48Z","lastTransitionTime":"2025-12-08T17:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.330859 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.330914 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.330927 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.330942 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.330953 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:48Z","lastTransitionTime":"2025-12-08T17:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.433283 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.433346 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.433361 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.433379 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.433394 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:48Z","lastTransitionTime":"2025-12-08T17:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.535445 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.535508 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.535523 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.535545 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.535559 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:48Z","lastTransitionTime":"2025-12-08T17:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.638300 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.638343 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.638351 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.638365 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.638375 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:48Z","lastTransitionTime":"2025-12-08T17:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.742416 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.742496 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.742515 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.742541 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.742561 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:48Z","lastTransitionTime":"2025-12-08T17:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.844882 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.845468 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.845482 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.845500 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.845512 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:48Z","lastTransitionTime":"2025-12-08T17:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.947960 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.947999 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.948009 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.948023 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:48 crc kubenswrapper[5112]: I1208 17:41:48.948033 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:48Z","lastTransitionTime":"2025-12-08T17:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.049696 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.049728 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.049737 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.049752 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.049764 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:49Z","lastTransitionTime":"2025-12-08T17:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.099690 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.099737 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.099758 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.099780 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 17:41:49 crc kubenswrapper[5112]: E1208 17:41:49.099889 5112 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 08 17:41:49 crc kubenswrapper[5112]: E1208 17:41:49.099906 5112 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 08 17:41:49 crc kubenswrapper[5112]: E1208 17:41:49.099927 5112 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 08 17:41:49 crc kubenswrapper[5112]: E1208 17:41:49.099941 5112 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 08 17:41:49 crc kubenswrapper[5112]: E1208 17:41:49.099941 5112 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 08 17:41:49 crc kubenswrapper[5112]: E1208 17:41:49.099965 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:41:57.099943098 +0000 UTC m=+94.109491799 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 08 17:41:49 crc kubenswrapper[5112]: E1208 17:41:49.099984 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 17:41:57.099973829 +0000 UTC m=+94.109522530 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 08 17:41:49 crc kubenswrapper[5112]: E1208 17:41:49.100003 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:41:57.099994969 +0000 UTC m=+94.109543670 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 08 17:41:49 crc kubenswrapper[5112]: E1208 17:41:49.100038 5112 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 08 17:41:49 crc kubenswrapper[5112]: E1208 17:41:49.100053 5112 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 08 17:41:49 crc kubenswrapper[5112]: E1208 17:41:49.100062 5112 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 08 17:41:49 crc kubenswrapper[5112]: E1208 17:41:49.100162 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 17:41:57.100151774 +0000 UTC m=+94.109700475 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.152272 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.152310 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.152319 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.152331 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.152339 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:49Z","lastTransitionTime":"2025-12-08T17:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.201426 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3c4fb553-8514-4194-847c-96d40f8b41e3-metrics-certs\") pod \"network-metrics-daemon-7jq8h\" (UID: \"3c4fb553-8514-4194-847c-96d40f8b41e3\") " pod="openshift-multus/network-metrics-daemon-7jq8h" Dec 08 17:41:49 crc kubenswrapper[5112]: E1208 17:41:49.201665 5112 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 17:41:49 crc kubenswrapper[5112]: E1208 17:41:49.201790 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c4fb553-8514-4194-847c-96d40f8b41e3-metrics-certs podName:3c4fb553-8514-4194-847c-96d40f8b41e3 nodeName:}" failed. No retries permitted until 2025-12-08 17:41:57.201761067 +0000 UTC m=+94.211309768 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3c4fb553-8514-4194-847c-96d40f8b41e3-metrics-certs") pod "network-metrics-daemon-7jq8h" (UID: "3c4fb553-8514-4194-847c-96d40f8b41e3") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.254666 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.254773 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.254785 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.254802 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.254815 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:49Z","lastTransitionTime":"2025-12-08T17:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.302028 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:41:49 crc kubenswrapper[5112]: E1208 17:41:49.302344 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:41:57.302312043 +0000 UTC m=+94.311860784 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.316810 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.316934 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jq8h" Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.316953 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.316858 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:41:49 crc kubenswrapper[5112]: E1208 17:41:49.317314 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 17:41:49 crc kubenswrapper[5112]: E1208 17:41:49.317389 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 17:41:49 crc kubenswrapper[5112]: E1208 17:41:49.317493 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 17:41:49 crc kubenswrapper[5112]: E1208 17:41:49.317250 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jq8h" podUID="3c4fb553-8514-4194-847c-96d40f8b41e3" Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.357200 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.357260 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.357270 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.357285 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.357295 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:49Z","lastTransitionTime":"2025-12-08T17:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.459421 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.459690 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.459784 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.459887 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.459975 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:49Z","lastTransitionTime":"2025-12-08T17:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.562553 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.562607 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.562618 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.562635 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.562649 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:49Z","lastTransitionTime":"2025-12-08T17:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.664406 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.664479 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.664494 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.664515 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.664529 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:49Z","lastTransitionTime":"2025-12-08T17:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.766681 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.766720 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.766732 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.766747 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.766757 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:49Z","lastTransitionTime":"2025-12-08T17:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.868602 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.868652 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.868664 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.868680 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.868692 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:49Z","lastTransitionTime":"2025-12-08T17:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.970426 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.970474 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.970493 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.970509 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:49 crc kubenswrapper[5112]: I1208 17:41:49.970520 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:49Z","lastTransitionTime":"2025-12-08T17:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.072229 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.072280 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.072290 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.072304 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.072314 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:50Z","lastTransitionTime":"2025-12-08T17:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.174974 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.175042 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.175065 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.175120 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.175137 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:50Z","lastTransitionTime":"2025-12-08T17:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.277829 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.277873 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.277884 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.277903 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.277913 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:50Z","lastTransitionTime":"2025-12-08T17:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.380106 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.380150 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.380162 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.380179 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.380191 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:50Z","lastTransitionTime":"2025-12-08T17:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.481857 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.481907 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.481919 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.481935 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.481947 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:50Z","lastTransitionTime":"2025-12-08T17:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.584428 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.584670 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.584689 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.584704 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.584714 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:50Z","lastTransitionTime":"2025-12-08T17:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.686668 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.686741 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.686762 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.686786 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.686802 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:50Z","lastTransitionTime":"2025-12-08T17:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.789365 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.789420 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.789432 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.789454 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.789466 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:50Z","lastTransitionTime":"2025-12-08T17:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.891300 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.891341 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.891350 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.891365 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.891374 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:50Z","lastTransitionTime":"2025-12-08T17:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.993989 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.994032 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.994041 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.994053 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:50 crc kubenswrapper[5112]: I1208 17:41:50.994063 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:50Z","lastTransitionTime":"2025-12-08T17:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.096220 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.096304 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.096328 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.096342 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.096351 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:51Z","lastTransitionTime":"2025-12-08T17:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.198859 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.199399 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.199512 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.199603 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.199686 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:51Z","lastTransitionTime":"2025-12-08T17:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.301967 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.302028 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.302042 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.302062 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.302078 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:51Z","lastTransitionTime":"2025-12-08T17:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.316820 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.316841 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:41:51 crc kubenswrapper[5112]: E1208 17:41:51.317011 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.317029 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jq8h" Dec 08 17:41:51 crc kubenswrapper[5112]: E1208 17:41:51.317331 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.317534 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:41:51 crc kubenswrapper[5112]: E1208 17:41:51.317522 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jq8h" podUID="3c4fb553-8514-4194-847c-96d40f8b41e3" Dec 08 17:41:51 crc kubenswrapper[5112]: E1208 17:41:51.317657 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.404549 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.404626 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.404640 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.404660 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.404672 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:51Z","lastTransitionTime":"2025-12-08T17:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.507192 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.507252 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.507267 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.507284 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.507299 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:51Z","lastTransitionTime":"2025-12-08T17:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.609493 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.609540 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.609551 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.609565 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.609574 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:51Z","lastTransitionTime":"2025-12-08T17:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.712241 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.712327 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.712342 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.712364 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.712381 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:51Z","lastTransitionTime":"2025-12-08T17:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.814838 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.814911 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.814928 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.814953 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.814973 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:51Z","lastTransitionTime":"2025-12-08T17:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.917723 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.917763 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.917771 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.917786 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.917796 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:51Z","lastTransitionTime":"2025-12-08T17:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:51 crc kubenswrapper[5112]: I1208 17:41:51.965251 5112 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.019612 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.019671 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.019683 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.019702 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.019715 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:52Z","lastTransitionTime":"2025-12-08T17:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.122695 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.122792 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.122810 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.122830 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.122845 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:52Z","lastTransitionTime":"2025-12-08T17:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.225892 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.225959 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.225981 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.226013 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.226035 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:52Z","lastTransitionTime":"2025-12-08T17:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.328389 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.328436 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.328445 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.328458 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.328467 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:52Z","lastTransitionTime":"2025-12-08T17:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.430930 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.430985 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.430999 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.431018 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.431030 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:52Z","lastTransitionTime":"2025-12-08T17:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.533745 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.533794 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.533807 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.533822 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.533834 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:52Z","lastTransitionTime":"2025-12-08T17:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.635760 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.635817 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.635829 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.635846 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.635856 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:52Z","lastTransitionTime":"2025-12-08T17:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.737904 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.737972 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.737984 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.738009 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.738022 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:52Z","lastTransitionTime":"2025-12-08T17:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.840193 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.840264 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.840283 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.840305 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.840322 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:52Z","lastTransitionTime":"2025-12-08T17:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.942680 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.942948 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.942969 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.942990 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:52 crc kubenswrapper[5112]: I1208 17:41:52.943008 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:52Z","lastTransitionTime":"2025-12-08T17:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.045488 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.045548 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.045561 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.045579 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.045592 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:53Z","lastTransitionTime":"2025-12-08T17:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.148007 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.148050 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.148062 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.148099 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.148112 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:53Z","lastTransitionTime":"2025-12-08T17:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.249931 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.249975 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.249986 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.250002 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.250010 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:53Z","lastTransitionTime":"2025-12-08T17:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.316127 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jq8h" Dec 08 17:41:53 crc kubenswrapper[5112]: E1208 17:41:53.316250 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jq8h" podUID="3c4fb553-8514-4194-847c-96d40f8b41e3" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.316331 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:41:53 crc kubenswrapper[5112]: E1208 17:41:53.316809 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.316846 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.316862 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:41:53 crc kubenswrapper[5112]: E1208 17:41:53.317010 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 17:41:53 crc kubenswrapper[5112]: E1208 17:41:53.317101 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 17:41:53 crc kubenswrapper[5112]: E1208 17:41:53.318176 5112 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 17:41:53 crc kubenswrapper[5112]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Dec 08 17:41:53 crc kubenswrapper[5112]: set -o allexport Dec 08 17:41:53 crc kubenswrapper[5112]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Dec 08 17:41:53 crc kubenswrapper[5112]: source /etc/kubernetes/apiserver-url.env Dec 08 17:41:53 crc kubenswrapper[5112]: else Dec 08 17:41:53 crc kubenswrapper[5112]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Dec 08 17:41:53 crc kubenswrapper[5112]: exit 1 Dec 08 17:41:53 crc kubenswrapper[5112]: fi Dec 08 17:41:53 crc kubenswrapper[5112]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Dec 08 17:41:53 crc kubenswrapper[5112]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 17:41:53 crc kubenswrapper[5112]: > logger="UnhandledError" Dec 08 17:41:53 crc kubenswrapper[5112]: E1208 17:41:53.318231 5112 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 17:41:53 crc kubenswrapper[5112]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Dec 08 17:41:53 crc kubenswrapper[5112]: while [ true ]; Dec 08 17:41:53 crc kubenswrapper[5112]: do Dec 08 17:41:53 crc kubenswrapper[5112]: for f in $(ls /tmp/serviceca); do Dec 08 17:41:53 crc 
kubenswrapper[5112]: echo $f Dec 08 17:41:53 crc kubenswrapper[5112]: ca_file_path="/tmp/serviceca/${f}" Dec 08 17:41:53 crc kubenswrapper[5112]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Dec 08 17:41:53 crc kubenswrapper[5112]: reg_dir_path="/etc/docker/certs.d/${f}" Dec 08 17:41:53 crc kubenswrapper[5112]: if [ -e "${reg_dir_path}" ]; then Dec 08 17:41:53 crc kubenswrapper[5112]: cp -u $ca_file_path $reg_dir_path/ca.crt Dec 08 17:41:53 crc kubenswrapper[5112]: else Dec 08 17:41:53 crc kubenswrapper[5112]: mkdir $reg_dir_path Dec 08 17:41:53 crc kubenswrapper[5112]: cp $ca_file_path $reg_dir_path/ca.crt Dec 08 17:41:53 crc kubenswrapper[5112]: fi Dec 08 17:41:53 crc kubenswrapper[5112]: done Dec 08 17:41:53 crc kubenswrapper[5112]: for d in $(ls /etc/docker/certs.d); do Dec 08 17:41:53 crc kubenswrapper[5112]: echo $d Dec 08 17:41:53 crc kubenswrapper[5112]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Dec 08 17:41:53 crc kubenswrapper[5112]: reg_conf_path="/tmp/serviceca/${dp}" Dec 08 17:41:53 crc kubenswrapper[5112]: if [ ! 
-e "${reg_conf_path}" ]; then Dec 08 17:41:53 crc kubenswrapper[5112]: rm -rf /etc/docker/certs.d/$d Dec 08 17:41:53 crc kubenswrapper[5112]: fi Dec 08 17:41:53 crc kubenswrapper[5112]: done Dec 08 17:41:53 crc kubenswrapper[5112]: sleep 60 & wait ${!} Dec 08 17:41:53 crc kubenswrapper[5112]: done Dec 08 17:41:53 crc kubenswrapper[5112]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-88g7z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-4hrlr_openshift-image-registry(5c78b0cc-34ce-48fe-aeb3-f84b04fc6af6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 17:41:53 crc kubenswrapper[5112]: > logger="UnhandledError" Dec 08 17:41:53 crc kubenswrapper[5112]: E1208 17:41:53.319371 5112 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Dec 08 17:41:53 crc kubenswrapper[5112]: E1208 17:41:53.319416 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-4hrlr" podUID="5c78b0cc-34ce-48fe-aeb3-f84b04fc6af6" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.326483 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.334571 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"472d4dbe-4674-43ba-98da-98502eccb960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv8p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv8p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-b7fmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.341126 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4hrlr" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c78b0cc-34ce-48fe-aeb3-f84b04fc6af6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4hrlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.350312 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.351513 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.351551 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.351562 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.351581 5112 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeNotReady" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.351592 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:53Z","lastTransitionTime":"2025-12-08T17:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.357233 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-rsc28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a63a9eaf-972d-4a8f-a9e5-f0f397bf8e9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pm48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rsc28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.371525 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0510de3f-316a-4902-a746-a746c3ce594c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ng27z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.381386 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a54de98a-e0fb-42e6-9458-35bf008a1af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8e1ad1521591e581cd357d3b49dde54e9a2c1a793edc8dced64f3acbe9f7f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50
Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://69cc882495a4c55c83d8793d16e873cde0e5c81bbf76ed52eec3ed59b99b937f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5e0e157a3ba41263bd7a39a6c64f50ccf232bc55ef3df90ffbbd314418ce69bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0018c38fa9031e6a1ae5e3b128294b6e93ace41c3e09982a94e2af830b208186\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0018c38fa9031e6a1ae5e3b128294b6e93ace41c3e09982a94e2af830b208186\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"19
2.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.392032 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d35301b2-73ca-44c7-bb4c-e7e68d41ac54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://635a0eb4e03cd1986945a33c0535e16c4f6d5cd1fbcbdb9e1ecf08f5ef7f71a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://26119bf950efdaf6bdf29fdd288dcde55f46e233074dadd05d09ae9662a98ac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62545f7eda9644c677205810fa88d197c6edaf171a4f7972db714c29d0e0c0f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7a8bdf5c7a
f29378b589fe72e7884a1b14b3ee02680ca8842031e246f786fa59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a8bdf5c7af29378b589fe72e7884a1b14b3ee02680ca8842031e246f786fa59\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T17:41:31Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW1208 17:41:31.167389 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 17:41:31.167693 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 17:41:31.168628 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3936714883/tls.crt::/tmp/serving-cert-3936714883/tls.key\\\\\\\"\\\\nI1208 17:41:31.681853 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 17:41:31.683635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 17:41:31.683651 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 17:41:31.683675 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 17:41:31.683681 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 17:41:31.690777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 17:41:31.690804 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 17:41:31.690811 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 17:41:31.690838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 17:41:31.690843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 17:41:31.690848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 17:41:31.690851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 17:41:31.690855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 17:41:31.693539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T17:41:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://43d313abdeed2aafe14cb80ae38e97b5bbdc62197de02664fbc6c17597822faa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3fbd9e0043ef37c10fbc9823215d21c9b1eee4025f73932cd6cbf3055d5f2397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fbd9e0043ef37c10fbc9823215d21c9b1eee4025f73932cd6cbf3055d5f2397\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.401115 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.410392 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-kvv4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"288ee203-be3f-4176-90b2-7d95ee47aee8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbcf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kvv4v\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.422894 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jq8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c4fb553-8514-4194-847c-96d40f8b41e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mv6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mv6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jq8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.431874 5112 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95e46da0-94bb-4d22-804b-b3018984cdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56lk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56lk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s6wzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.452960 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.453019 5112 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.453031 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.453055 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.453067 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:53Z","lastTransitionTime":"2025-12-08T17:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.455090 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ad0b160-7036-4cfb-9738-1e0e8ebe1e5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://953eaf00aeddf0f031eb9db85dda27332777dd31ac6746dfdedcc13ed20cb02c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://a7b9e7098ad13452cf8f0aa13c84480bf630b57c0296cec645e8fd4f030b13fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7ace3eb0fbb6c37ad43df89af7c25f6a0bda9c7e079a6bfb7683984630e7cd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c9534bda3d71b68f6920f0c8a5dd54d3d31bac188d8fb76a1d29a3f5f0b621a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://16bdf27bbd7b756aec823f0df94a6a72c5ad978e71a5e24824de2ab45e54c0c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://60a4953961d430db10a6fe21df995549ba6cbe84be5c4e7bfc3788c53c152bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60a4953961d430db10a6fe21df995549ba6cbe84be5c4e7bfc3788c53c152bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://276a86f68c5958c297d0c493b713318ea5ebe302666d6478a229eae2ffed90b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://276a86f68c5958c297d0c493b713318ea5ebe302666d6478a229eae2ffed90b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8759198baf7f22676f14d8513f3af488df4c58c4ed65db59aac2bff888a6d878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://8759198baf7f22676f14d8513f3af488df4c58c4ed65db59aac2bff888a6d878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.466817 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a7a500b-9152-4fa4-a5ef-7a037610043a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ea6605166b2660aac60c892c3aa4300f70f3c325fa54b0c5cebab4c59e7e44d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1f8f801ff9a8f57c7935a0dc59f8db7ead28e39b9b0235c5fbac0238e1ae4d1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://407e6dc04957ad635291d63043e12fc7751c6de36462219e6f8e991af59b523c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f47b69e17f8b8b7e2c46f449515d3eb8408a6ef649bf396eef3abeac2d4b2483\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.476897 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.486486 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.496061 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.507258 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9xjh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"575dcc54-1cfa-45ab-8c22-087fcf27f142\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9xjh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.514262 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"367e7840-8095-41c1-93ec-9c02ff4d243d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://63bd2b5515bea7e14b54005f1477f959aac15ff6b2771db37fc28e46eea6be70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://86e97499087a3161ebf425803d6ac7e66513b7c73c759c730b8a17858f4e1b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e97499087a3161ebf425803d6ac7e66513b7c73c759c730b8a17858f4e1b82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.555425 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:53 crc 
kubenswrapper[5112]: I1208 17:41:53.555500 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.555513 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.555528 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.555542 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:53Z","lastTransitionTime":"2025-12-08T17:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.581917 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.581977 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.581989 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.582006 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.582021 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:53Z","lastTransitionTime":"2025-12-08T17:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:53 crc kubenswrapper[5112]: E1208 17:41:53.592392 5112 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bfc9941-22f6-447c-a313-68da2bceb39a\\\",\\\"systemUUID\\\":\\\"b5fe6617-167d-4502-9bb8-e694c6fec87c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.595495 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.595545 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.595553 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.595566 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.595576 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:53Z","lastTransitionTime":"2025-12-08T17:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:53 crc kubenswrapper[5112]: E1208 17:41:53.605743 5112 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bfc9941-22f6-447c-a313-68da2bceb39a\\\",\\\"systemUUID\\\":\\\"b5fe6617-167d-4502-9bb8-e694c6fec87c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.609578 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.609621 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.609692 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.609711 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.609724 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:53Z","lastTransitionTime":"2025-12-08T17:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:53 crc kubenswrapper[5112]: E1208 17:41:53.623457 5112 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bfc9941-22f6-447c-a313-68da2bceb39a\\\",\\\"systemUUID\\\":\\\"b5fe6617-167d-4502-9bb8-e694c6fec87c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.627315 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.627355 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.627373 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.627388 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.627398 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:53Z","lastTransitionTime":"2025-12-08T17:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:53 crc kubenswrapper[5112]: E1208 17:41:53.636692 5112 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bfc9941-22f6-447c-a313-68da2bceb39a\\\",\\\"systemUUID\\\":\\\"b5fe6617-167d-4502-9bb8-e694c6fec87c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.639703 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.639811 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.639838 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.639867 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.639891 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:53Z","lastTransitionTime":"2025-12-08T17:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:53 crc kubenswrapper[5112]: E1208 17:41:53.650849 5112 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bfc9941-22f6-447c-a313-68da2bceb39a\\\",\\\"systemUUID\\\":\\\"b5fe6617-167d-4502-9bb8-e694c6fec87c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5112]: E1208 17:41:53.651035 5112 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.657126 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.657202 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.657221 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.657239 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.657278 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:53Z","lastTransitionTime":"2025-12-08T17:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.759395 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.759450 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.759459 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.759475 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.759486 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:53Z","lastTransitionTime":"2025-12-08T17:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.862334 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.862392 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.862405 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.862428 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.862441 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:53Z","lastTransitionTime":"2025-12-08T17:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.965037 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.965093 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.965102 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.965116 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:53 crc kubenswrapper[5112]: I1208 17:41:53.965125 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:53Z","lastTransitionTime":"2025-12-08T17:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.066777 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.066828 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.066839 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.066855 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.066868 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:54Z","lastTransitionTime":"2025-12-08T17:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.168884 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.168942 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.168954 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.168973 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.168989 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:54Z","lastTransitionTime":"2025-12-08T17:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.271169 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.271240 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.271252 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.271277 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.271296 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:54Z","lastTransitionTime":"2025-12-08T17:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.318499 5112 scope.go:117] "RemoveContainer" containerID="7a8bdf5c7af29378b589fe72e7884a1b14b3ee02680ca8842031e246f786fa59" Dec 08 17:41:54 crc kubenswrapper[5112]: E1208 17:41:54.318702 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 17:41:54 crc kubenswrapper[5112]: E1208 17:41:54.318924 5112 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 17:41:54 crc kubenswrapper[5112]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Dec 08 17:41:54 crc kubenswrapper[5112]: set -uo pipefail Dec 08 17:41:54 crc kubenswrapper[5112]: Dec 08 17:41:54 crc kubenswrapper[5112]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Dec 08 17:41:54 crc kubenswrapper[5112]: Dec 08 17:41:54 crc kubenswrapper[5112]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Dec 08 17:41:54 crc kubenswrapper[5112]: HOSTS_FILE="/etc/hosts" Dec 08 17:41:54 crc kubenswrapper[5112]: TEMP_FILE="/tmp/hosts.tmp" Dec 08 17:41:54 crc kubenswrapper[5112]: Dec 08 17:41:54 crc kubenswrapper[5112]: IFS=', ' read -r -a services <<< "${SERVICES}" Dec 08 17:41:54 crc kubenswrapper[5112]: Dec 08 17:41:54 crc kubenswrapper[5112]: # Make a temporary file with the old hosts file's attributes. Dec 08 17:41:54 crc kubenswrapper[5112]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Dec 08 17:41:54 crc kubenswrapper[5112]: echo "Failed to preserve hosts file. Exiting." 
Dec 08 17:41:54 crc kubenswrapper[5112]: exit 1 Dec 08 17:41:54 crc kubenswrapper[5112]: fi Dec 08 17:41:54 crc kubenswrapper[5112]: Dec 08 17:41:54 crc kubenswrapper[5112]: while true; do Dec 08 17:41:54 crc kubenswrapper[5112]: declare -A svc_ips Dec 08 17:41:54 crc kubenswrapper[5112]: for svc in "${services[@]}"; do Dec 08 17:41:54 crc kubenswrapper[5112]: # Fetch service IP from cluster dns if present. We make several tries Dec 08 17:41:54 crc kubenswrapper[5112]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Dec 08 17:41:54 crc kubenswrapper[5112]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Dec 08 17:41:54 crc kubenswrapper[5112]: # support UDP loadbalancers and require reaching DNS through TCP. Dec 08 17:41:54 crc kubenswrapper[5112]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 17:41:54 crc kubenswrapper[5112]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 17:41:54 crc kubenswrapper[5112]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 17:41:54 crc kubenswrapper[5112]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Dec 08 17:41:54 crc kubenswrapper[5112]: for i in ${!cmds[*]} Dec 08 17:41:54 crc kubenswrapper[5112]: do Dec 08 17:41:54 crc kubenswrapper[5112]: ips=($(eval "${cmds[i]}")) Dec 08 17:41:54 crc kubenswrapper[5112]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Dec 08 17:41:54 crc kubenswrapper[5112]: svc_ips["${svc}"]="${ips[@]}" Dec 08 17:41:54 crc kubenswrapper[5112]: break Dec 08 17:41:54 crc kubenswrapper[5112]: fi Dec 08 17:41:54 crc kubenswrapper[5112]: done Dec 08 17:41:54 crc kubenswrapper[5112]: done Dec 08 17:41:54 crc kubenswrapper[5112]: Dec 08 17:41:54 crc kubenswrapper[5112]: # Update /etc/hosts only if we get valid service IPs Dec 08 17:41:54 crc kubenswrapper[5112]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Dec 08 17:41:54 crc kubenswrapper[5112]: # Stale entries could exist in /etc/hosts if the service is deleted Dec 08 17:41:54 crc kubenswrapper[5112]: if [[ -n "${svc_ips[*]-}" ]]; then Dec 08 17:41:54 crc kubenswrapper[5112]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Dec 08 17:41:54 crc kubenswrapper[5112]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Dec 08 17:41:54 crc kubenswrapper[5112]: # Only continue rebuilding the hosts entries if its original content is preserved Dec 08 17:41:54 crc kubenswrapper[5112]: sleep 60 & wait Dec 08 17:41:54 crc kubenswrapper[5112]: continue Dec 08 17:41:54 crc kubenswrapper[5112]: fi Dec 08 17:41:54 crc kubenswrapper[5112]: Dec 08 17:41:54 crc kubenswrapper[5112]: # Append resolver entries for services Dec 08 17:41:54 crc kubenswrapper[5112]: rc=0 Dec 08 17:41:54 crc kubenswrapper[5112]: for svc in "${!svc_ips[@]}"; do Dec 08 17:41:54 crc kubenswrapper[5112]: for ip in ${svc_ips[${svc}]}; do Dec 08 17:41:54 crc kubenswrapper[5112]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Dec 08 17:41:54 crc kubenswrapper[5112]: done Dec 08 17:41:54 crc kubenswrapper[5112]: done Dec 08 17:41:54 crc kubenswrapper[5112]: if [[ $rc -ne 0 ]]; then Dec 08 17:41:54 crc kubenswrapper[5112]: sleep 60 & wait Dec 08 17:41:54 crc kubenswrapper[5112]: continue Dec 08 17:41:54 crc kubenswrapper[5112]: fi Dec 08 17:41:54 crc kubenswrapper[5112]: Dec 08 17:41:54 crc kubenswrapper[5112]: Dec 08 17:41:54 crc kubenswrapper[5112]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Dec 08 17:41:54 crc kubenswrapper[5112]: # Replace /etc/hosts with our modified version if needed Dec 08 17:41:54 crc kubenswrapper[5112]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Dec 08 17:41:54 crc kubenswrapper[5112]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Dec 08 17:41:54 crc kubenswrapper[5112]: fi Dec 08 17:41:54 crc kubenswrapper[5112]: sleep 60 & wait Dec 08 17:41:54 crc kubenswrapper[5112]: unset svc_ips Dec 08 17:41:54 crc kubenswrapper[5112]: done Dec 08 17:41:54 crc kubenswrapper[5112]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4pm48,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-rsc28_openshift-dns(a63a9eaf-972d-4a8f-a9e5-f0f397bf8e9b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 17:41:54 crc kubenswrapper[5112]: > logger="UnhandledError" Dec 08 17:41:54 crc kubenswrapper[5112]: E1208 17:41:54.319028 5112 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 17:41:54 crc kubenswrapper[5112]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 08 17:41:54 crc kubenswrapper[5112]: if [[ -f "/env/_master" ]]; then Dec 08 17:41:54 crc kubenswrapper[5112]: set -o allexport Dec 08 17:41:54 crc kubenswrapper[5112]: source "/env/_master" Dec 08 17:41:54 crc kubenswrapper[5112]: set +o allexport Dec 08 17:41:54 crc 
kubenswrapper[5112]: fi Dec 08 17:41:54 crc kubenswrapper[5112]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Dec 08 17:41:54 crc kubenswrapper[5112]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Dec 08 17:41:54 crc kubenswrapper[5112]: ho_enable="--enable-hybrid-overlay" Dec 08 17:41:54 crc kubenswrapper[5112]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Dec 08 17:41:54 crc kubenswrapper[5112]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Dec 08 17:41:54 crc kubenswrapper[5112]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Dec 08 17:41:54 crc kubenswrapper[5112]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 08 17:41:54 crc kubenswrapper[5112]: --webhook-cert-dir="/etc/webhook-cert" \ Dec 08 17:41:54 crc kubenswrapper[5112]: --webhook-host=127.0.0.1 \ Dec 08 17:41:54 crc kubenswrapper[5112]: --webhook-port=9743 \ Dec 08 17:41:54 crc kubenswrapper[5112]: ${ho_enable} \ Dec 08 17:41:54 crc kubenswrapper[5112]: --enable-interconnect \ Dec 08 17:41:54 crc kubenswrapper[5112]: --disable-approver \ Dec 08 17:41:54 crc kubenswrapper[5112]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Dec 08 17:41:54 crc kubenswrapper[5112]: --wait-for-kubernetes-api=200s \ Dec 08 17:41:54 crc kubenswrapper[5112]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Dec 08 17:41:54 crc kubenswrapper[5112]: --loglevel="${LOGLEVEL}" Dec 08 17:41:54 crc kubenswrapper[5112]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 17:41:54 crc 
kubenswrapper[5112]: > logger="UnhandledError" Dec 08 17:41:54 crc kubenswrapper[5112]: E1208 17:41:54.320121 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-rsc28" podUID="a63a9eaf-972d-4a8f-a9e5-f0f397bf8e9b" Dec 08 17:41:54 crc kubenswrapper[5112]: E1208 17:41:54.320983 5112 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 17:41:54 crc kubenswrapper[5112]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 08 17:41:54 crc kubenswrapper[5112]: if [[ -f "/env/_master" ]]; then Dec 08 17:41:54 crc kubenswrapper[5112]: set -o allexport Dec 08 17:41:54 crc kubenswrapper[5112]: source "/env/_master" Dec 08 17:41:54 crc kubenswrapper[5112]: set +o allexport Dec 08 17:41:54 crc kubenswrapper[5112]: fi Dec 08 17:41:54 crc kubenswrapper[5112]: Dec 08 17:41:54 crc kubenswrapper[5112]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Dec 08 17:41:54 crc kubenswrapper[5112]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 08 17:41:54 crc kubenswrapper[5112]: --disable-webhook \ Dec 08 17:41:54 crc kubenswrapper[5112]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Dec 08 17:41:54 crc kubenswrapper[5112]: --loglevel="${LOGLEVEL}" Dec 08 17:41:54 crc kubenswrapper[5112]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 17:41:54 crc kubenswrapper[5112]: > logger="UnhandledError" Dec 08 17:41:54 crc kubenswrapper[5112]: E1208 17:41:54.322180 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" 
podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0"
Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.373945 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.374016 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.374028 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.374044 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.374053 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:54Z","lastTransitionTime":"2025-12-08T17:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.477389 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.477456 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.477467 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.477498 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.477513 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:54Z","lastTransitionTime":"2025-12-08T17:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.580692 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.580773 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.580792 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.580816 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.580832 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:54Z","lastTransitionTime":"2025-12-08T17:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.683065 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.683161 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.683183 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.683205 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.683220 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:54Z","lastTransitionTime":"2025-12-08T17:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.785305 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.785362 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.785381 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.785403 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.785418 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:54Z","lastTransitionTime":"2025-12-08T17:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.888549 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.888654 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.888676 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.888771 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.888790 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:54Z","lastTransitionTime":"2025-12-08T17:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.991006 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.991047 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.991058 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.991074 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:54 crc kubenswrapper[5112]: I1208 17:41:54.991107 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:54Z","lastTransitionTime":"2025-12-08T17:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.092332 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.092376 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.092389 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.092405 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.092417 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:55Z","lastTransitionTime":"2025-12-08T17:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.194760 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.194803 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.194812 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.194826 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.194837 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:55Z","lastTransitionTime":"2025-12-08T17:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.296508 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.296587 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.296607 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.296627 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.296642 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:55Z","lastTransitionTime":"2025-12-08T17:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.316454 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.316517 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 17:41:55 crc kubenswrapper[5112]: E1208 17:41:55.317335 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.317398 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 17:41:55 crc kubenswrapper[5112]: E1208 17:41:55.317602 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.317644 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jq8h"
Dec 08 17:41:55 crc kubenswrapper[5112]: E1208 17:41:55.317782 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jq8h" podUID="3c4fb553-8514-4194-847c-96d40f8b41e3"
Dec 08 17:41:55 crc kubenswrapper[5112]: E1208 17:41:55.317902 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 17:41:55 crc kubenswrapper[5112]: E1208 17:41:55.319693 5112 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,Volume
Devices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 17:41:55 crc kubenswrapper[5112]: E1208 17:41:55.320042 5112 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 17:41:55 crc kubenswrapper[5112]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Dec 08 17:41:55 crc kubenswrapper[5112]: apiVersion: v1 Dec 08 17:41:55 crc kubenswrapper[5112]: clusters: Dec 08 17:41:55 crc kubenswrapper[5112]: - cluster: Dec 08 17:41:55 crc kubenswrapper[5112]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Dec 08 17:41:55 crc kubenswrapper[5112]: server: https://api-int.crc.testing:6443 Dec 08 17:41:55 crc kubenswrapper[5112]: name: default-cluster Dec 08 17:41:55 crc kubenswrapper[5112]: contexts: Dec 08 17:41:55 crc kubenswrapper[5112]: - context: Dec 08 17:41:55 crc kubenswrapper[5112]: cluster: default-cluster Dec 08 17:41:55 crc kubenswrapper[5112]: namespace: default Dec 08 17:41:55 crc kubenswrapper[5112]: user: default-auth Dec 08 17:41:55 crc kubenswrapper[5112]: name: default-context Dec 08 17:41:55 crc kubenswrapper[5112]: current-context: default-context Dec 08 17:41:55 crc kubenswrapper[5112]: kind: Config Dec 08 17:41:55 crc kubenswrapper[5112]: preferences: {} Dec 08 17:41:55 crc kubenswrapper[5112]: users: Dec 08 17:41:55 crc kubenswrapper[5112]: - name: default-auth Dec 08 17:41:55 crc kubenswrapper[5112]: user: Dec 08 17:41:55 crc kubenswrapper[5112]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 08 17:41:55 crc kubenswrapper[5112]: 
client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 08 17:41:55 crc kubenswrapper[5112]: EOF Dec 08 17:41:55 crc kubenswrapper[5112]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7vcrm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-ng27z_openshift-ovn-kubernetes(0510de3f-316a-4902-a746-a746c3ce594c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 17:41:55 crc kubenswrapper[5112]: > logger="UnhandledError" Dec 08 17:41:55 crc kubenswrapper[5112]: E1208 17:41:55.321012 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Dec 08 17:41:55 crc kubenswrapper[5112]: E1208 17:41:55.321143 5112 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 17:41:55 crc kubenswrapper[5112]: container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Dec 08 17:41:55 crc kubenswrapper[5112]: set -euo pipefail Dec 08 17:41:55 crc kubenswrapper[5112]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Dec 08 17:41:55 crc kubenswrapper[5112]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Dec 08 17:41:55 crc kubenswrapper[5112]: # As the secret mount is optional we must wait for the files to be present. Dec 08 17:41:55 crc kubenswrapper[5112]: # The service is created in monitor.yaml and this is created in sdn.yaml. Dec 08 17:41:55 crc kubenswrapper[5112]: TS=$(date +%s) Dec 08 17:41:55 crc kubenswrapper[5112]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Dec 08 17:41:55 crc kubenswrapper[5112]: HAS_LOGGED_INFO=0 Dec 08 17:41:55 crc kubenswrapper[5112]: Dec 08 17:41:55 crc kubenswrapper[5112]: log_missing_certs(){ Dec 08 17:41:55 crc kubenswrapper[5112]: CUR_TS=$(date +%s) Dec 08 17:41:55 crc kubenswrapper[5112]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Dec 08 17:41:55 crc kubenswrapper[5112]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Dec 08 17:41:55 crc kubenswrapper[5112]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Dec 08 17:41:55 crc kubenswrapper[5112]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Dec 08 17:41:55 crc kubenswrapper[5112]: HAS_LOGGED_INFO=1 Dec 08 17:41:55 crc kubenswrapper[5112]: fi Dec 08 17:41:55 crc kubenswrapper[5112]: } Dec 08 17:41:55 crc kubenswrapper[5112]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Dec 08 17:41:55 crc kubenswrapper[5112]: log_missing_certs Dec 08 17:41:55 crc kubenswrapper[5112]: sleep 5 Dec 08 17:41:55 crc kubenswrapper[5112]: done Dec 08 17:41:55 crc kubenswrapper[5112]: Dec 08 17:41:55 crc kubenswrapper[5112]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Dec 08 17:41:55 crc kubenswrapper[5112]: exec /usr/bin/kube-rbac-proxy \ Dec 08 17:41:55 crc kubenswrapper[5112]: --logtostderr \ Dec 08 17:41:55 crc kubenswrapper[5112]: --secure-listen-address=:9108 \ Dec 08 17:41:55 crc kubenswrapper[5112]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Dec 08 17:41:55 crc kubenswrapper[5112]: --upstream=http://127.0.0.1:29108/ \ Dec 08 17:41:55 crc kubenswrapper[5112]: --tls-private-key-file=${TLS_PK} \ Dec 08 17:41:55 crc kubenswrapper[5112]: --tls-cert-file=${TLS_CERT} Dec 08 17:41:55 crc kubenswrapper[5112]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sv8p6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-b7fmf_openshift-ovn-kubernetes(472d4dbe-4674-43ba-98da-98502eccb960): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 17:41:55 crc kubenswrapper[5112]: > logger="UnhandledError" Dec 08 17:41:55 crc kubenswrapper[5112]: E1208 17:41:55.321168 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" podUID="0510de3f-316a-4902-a746-a746c3ce594c" Dec 08 17:41:55 crc kubenswrapper[5112]: E1208 17:41:55.323942 5112 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 17:41:55 crc kubenswrapper[5112]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 08 17:41:55 crc kubenswrapper[5112]: if [[ -f "/env/_master" ]]; then Dec 08 17:41:55 crc kubenswrapper[5112]: set -o allexport Dec 08 17:41:55 crc kubenswrapper[5112]: source 
"/env/_master" Dec 08 17:41:55 crc kubenswrapper[5112]: set +o allexport Dec 08 17:41:55 crc kubenswrapper[5112]: fi Dec 08 17:41:55 crc kubenswrapper[5112]: Dec 08 17:41:55 crc kubenswrapper[5112]: ovn_v4_join_subnet_opt= Dec 08 17:41:55 crc kubenswrapper[5112]: if [[ "" != "" ]]; then Dec 08 17:41:55 crc kubenswrapper[5112]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Dec 08 17:41:55 crc kubenswrapper[5112]: fi Dec 08 17:41:55 crc kubenswrapper[5112]: ovn_v6_join_subnet_opt= Dec 08 17:41:55 crc kubenswrapper[5112]: if [[ "" != "" ]]; then Dec 08 17:41:55 crc kubenswrapper[5112]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Dec 08 17:41:55 crc kubenswrapper[5112]: fi Dec 08 17:41:55 crc kubenswrapper[5112]: Dec 08 17:41:55 crc kubenswrapper[5112]: ovn_v4_transit_switch_subnet_opt= Dec 08 17:41:55 crc kubenswrapper[5112]: if [[ "" != "" ]]; then Dec 08 17:41:55 crc kubenswrapper[5112]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Dec 08 17:41:55 crc kubenswrapper[5112]: fi Dec 08 17:41:55 crc kubenswrapper[5112]: ovn_v6_transit_switch_subnet_opt= Dec 08 17:41:55 crc kubenswrapper[5112]: if [[ "" != "" ]]; then Dec 08 17:41:55 crc kubenswrapper[5112]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Dec 08 17:41:55 crc kubenswrapper[5112]: fi Dec 08 17:41:55 crc kubenswrapper[5112]: Dec 08 17:41:55 crc kubenswrapper[5112]: dns_name_resolver_enabled_flag= Dec 08 17:41:55 crc kubenswrapper[5112]: if [[ "false" == "true" ]]; then Dec 08 17:41:55 crc kubenswrapper[5112]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Dec 08 17:41:55 crc kubenswrapper[5112]: fi Dec 08 17:41:55 crc kubenswrapper[5112]: Dec 08 17:41:55 crc kubenswrapper[5112]: persistent_ips_enabled_flag="--enable-persistent-ips" Dec 08 17:41:55 crc kubenswrapper[5112]: Dec 08 17:41:55 crc kubenswrapper[5112]: # This is needed so that converting clusters from GA to TP Dec 08 17:41:55 crc kubenswrapper[5112]: # will 
rollout control plane pods as well Dec 08 17:41:55 crc kubenswrapper[5112]: network_segmentation_enabled_flag= Dec 08 17:41:55 crc kubenswrapper[5112]: multi_network_enabled_flag= Dec 08 17:41:55 crc kubenswrapper[5112]: if [[ "true" == "true" ]]; then Dec 08 17:41:55 crc kubenswrapper[5112]: multi_network_enabled_flag="--enable-multi-network" Dec 08 17:41:55 crc kubenswrapper[5112]: fi Dec 08 17:41:55 crc kubenswrapper[5112]: if [[ "true" == "true" ]]; then Dec 08 17:41:55 crc kubenswrapper[5112]: if [[ "true" != "true" ]]; then Dec 08 17:41:55 crc kubenswrapper[5112]: multi_network_enabled_flag="--enable-multi-network" Dec 08 17:41:55 crc kubenswrapper[5112]: fi Dec 08 17:41:55 crc kubenswrapper[5112]: network_segmentation_enabled_flag="--enable-network-segmentation" Dec 08 17:41:55 crc kubenswrapper[5112]: fi Dec 08 17:41:55 crc kubenswrapper[5112]: Dec 08 17:41:55 crc kubenswrapper[5112]: route_advertisements_enable_flag= Dec 08 17:41:55 crc kubenswrapper[5112]: if [[ "false" == "true" ]]; then Dec 08 17:41:55 crc kubenswrapper[5112]: route_advertisements_enable_flag="--enable-route-advertisements" Dec 08 17:41:55 crc kubenswrapper[5112]: fi Dec 08 17:41:55 crc kubenswrapper[5112]: Dec 08 17:41:55 crc kubenswrapper[5112]: preconfigured_udn_addresses_enable_flag= Dec 08 17:41:55 crc kubenswrapper[5112]: if [[ "false" == "true" ]]; then Dec 08 17:41:55 crc kubenswrapper[5112]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Dec 08 17:41:55 crc kubenswrapper[5112]: fi Dec 08 17:41:55 crc kubenswrapper[5112]: Dec 08 17:41:55 crc kubenswrapper[5112]: # Enable multi-network policy if configured (control-plane always full mode) Dec 08 17:41:55 crc kubenswrapper[5112]: multi_network_policy_enabled_flag= Dec 08 17:41:55 crc kubenswrapper[5112]: if [[ "false" == "true" ]]; then Dec 08 17:41:55 crc kubenswrapper[5112]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Dec 08 17:41:55 crc kubenswrapper[5112]: fi Dec 08 17:41:55 
crc kubenswrapper[5112]: Dec 08 17:41:55 crc kubenswrapper[5112]: # Enable admin network policy if configured (control-plane always full mode) Dec 08 17:41:55 crc kubenswrapper[5112]: admin_network_policy_enabled_flag= Dec 08 17:41:55 crc kubenswrapper[5112]: if [[ "true" == "true" ]]; then Dec 08 17:41:55 crc kubenswrapper[5112]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Dec 08 17:41:55 crc kubenswrapper[5112]: fi Dec 08 17:41:55 crc kubenswrapper[5112]: Dec 08 17:41:55 crc kubenswrapper[5112]: if [ "shared" == "shared" ]; then Dec 08 17:41:55 crc kubenswrapper[5112]: gateway_mode_flags="--gateway-mode shared" Dec 08 17:41:55 crc kubenswrapper[5112]: elif [ "shared" == "local" ]; then Dec 08 17:41:55 crc kubenswrapper[5112]: gateway_mode_flags="--gateway-mode local" Dec 08 17:41:55 crc kubenswrapper[5112]: else Dec 08 17:41:55 crc kubenswrapper[5112]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." Dec 08 17:41:55 crc kubenswrapper[5112]: exit 1 Dec 08 17:41:55 crc kubenswrapper[5112]: fi Dec 08 17:41:55 crc kubenswrapper[5112]: Dec 08 17:41:55 crc kubenswrapper[5112]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Dec 08 17:41:55 crc kubenswrapper[5112]: exec /usr/bin/ovnkube \ Dec 08 17:41:55 crc kubenswrapper[5112]: --enable-interconnect \ Dec 08 17:41:55 crc kubenswrapper[5112]: --init-cluster-manager "${K8S_NODE}" \ Dec 08 17:41:55 crc kubenswrapper[5112]: --config-file=/run/ovnkube-config/ovnkube.conf \ Dec 08 17:41:55 crc kubenswrapper[5112]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Dec 08 17:41:55 crc kubenswrapper[5112]: --metrics-bind-address "127.0.0.1:29108" \ Dec 08 17:41:55 crc kubenswrapper[5112]: --metrics-enable-pprof \ Dec 08 17:41:55 crc kubenswrapper[5112]: --metrics-enable-config-duration \ Dec 08 17:41:55 crc kubenswrapper[5112]: ${ovn_v4_join_subnet_opt} \ Dec 08 17:41:55 crc kubenswrapper[5112]: ${ovn_v6_join_subnet_opt} \ Dec 
08 17:41:55 crc kubenswrapper[5112]: ${ovn_v4_transit_switch_subnet_opt} \ Dec 08 17:41:55 crc kubenswrapper[5112]: ${ovn_v6_transit_switch_subnet_opt} \ Dec 08 17:41:55 crc kubenswrapper[5112]: ${dns_name_resolver_enabled_flag} \ Dec 08 17:41:55 crc kubenswrapper[5112]: ${persistent_ips_enabled_flag} \ Dec 08 17:41:55 crc kubenswrapper[5112]: ${multi_network_enabled_flag} \ Dec 08 17:41:55 crc kubenswrapper[5112]: ${network_segmentation_enabled_flag} \ Dec 08 17:41:55 crc kubenswrapper[5112]: ${gateway_mode_flags} \ Dec 08 17:41:55 crc kubenswrapper[5112]: ${route_advertisements_enable_flag} \ Dec 08 17:41:55 crc kubenswrapper[5112]: ${preconfigured_udn_addresses_enable_flag} \ Dec 08 17:41:55 crc kubenswrapper[5112]: --enable-egress-ip=true \ Dec 08 17:41:55 crc kubenswrapper[5112]: --enable-egress-firewall=true \ Dec 08 17:41:55 crc kubenswrapper[5112]: --enable-egress-qos=true \ Dec 08 17:41:55 crc kubenswrapper[5112]: --enable-egress-service=true \ Dec 08 17:41:55 crc kubenswrapper[5112]: --enable-multicast \ Dec 08 17:41:55 crc kubenswrapper[5112]: --enable-multi-external-gateway=true \ Dec 08 17:41:55 crc kubenswrapper[5112]: ${multi_network_policy_enabled_flag} \ Dec 08 17:41:55 crc kubenswrapper[5112]: ${admin_network_policy_enabled_flag} Dec 08 17:41:55 crc kubenswrapper[5112]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sv8p6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-b7fmf_openshift-ovn-kubernetes(472d4dbe-4674-43ba-98da-98502eccb960): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 17:41:55 crc kubenswrapper[5112]: > logger="UnhandledError" Dec 08 17:41:55 crc kubenswrapper[5112]: E1208 17:41:55.325188 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" podUID="472d4dbe-4674-43ba-98da-98502eccb960" Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.399896 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 
17:41:55.399991 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.400012 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.400483 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.400525 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:55Z","lastTransitionTime":"2025-12-08T17:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.503330 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.503414 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.503438 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.503468 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.503491 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:55Z","lastTransitionTime":"2025-12-08T17:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.606334 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.606418 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.606436 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.606465 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.606484 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:55Z","lastTransitionTime":"2025-12-08T17:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.708589 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.708633 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.708644 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.708657 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.708667 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:55Z","lastTransitionTime":"2025-12-08T17:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.810654 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.810759 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.810774 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.810796 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.810811 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:55Z","lastTransitionTime":"2025-12-08T17:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.912511 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.912547 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.912558 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.912573 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:55 crc kubenswrapper[5112]: I1208 17:41:55.912587 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:55Z","lastTransitionTime":"2025-12-08T17:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.014381 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.014449 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.014464 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.014485 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.014497 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:56Z","lastTransitionTime":"2025-12-08T17:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.117054 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.117223 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.117244 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.117265 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.117281 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:56Z","lastTransitionTime":"2025-12-08T17:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.219472 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.219541 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.219572 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.219592 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.219603 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:56Z","lastTransitionTime":"2025-12-08T17:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:56 crc kubenswrapper[5112]: E1208 17:41:56.318402 5112 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5c98z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-9xjh5_openshift-multus(575dcc54-1cfa-45ab-8c22-087fcf27f142): CreateContainerConfigError: services have not yet been read at least once, cannot 
construct envvars" logger="UnhandledError" Dec 08 17:41:56 crc kubenswrapper[5112]: E1208 17:41:56.318568 5112 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 17:41:56 crc kubenswrapper[5112]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Dec 08 17:41:56 crc kubenswrapper[5112]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Dec 08 17:41:56 crc kubenswrapper[5112]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gbcf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-kvv4v_openshift-multus(288ee203-be3f-4176-90b2-7d95ee47aee8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 17:41:56 crc kubenswrapper[5112]: > logger="UnhandledError" Dec 08 17:41:56 crc kubenswrapper[5112]: E1208 17:41:56.318865 5112 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start 
--payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-56lk7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-s6wzf_openshift-machine-config-operator(95e46da0-94bb-4d22-804b-b3018984cdac): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" 
logger="UnhandledError" Dec 08 17:41:56 crc kubenswrapper[5112]: E1208 17:41:56.319796 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-9xjh5" podUID="575dcc54-1cfa-45ab-8c22-087fcf27f142" Dec 08 17:41:56 crc kubenswrapper[5112]: E1208 17:41:56.319849 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-kvv4v" podUID="288ee203-be3f-4176-90b2-7d95ee47aee8" Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.321059 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.321136 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.321155 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.321175 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.321191 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:56Z","lastTransitionTime":"2025-12-08T17:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:56 crc kubenswrapper[5112]: E1208 17:41:56.321448 5112 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-56lk7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-s6wzf_openshift-machine-config-operator(95e46da0-94bb-4d22-804b-b3018984cdac): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 17:41:56 crc kubenswrapper[5112]: E1208 17:41:56.322704 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" podUID="95e46da0-94bb-4d22-804b-b3018984cdac" Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.423709 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.423754 5112 kubelet_node_status.go:736] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.423765 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.423779 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.423788 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:56Z","lastTransitionTime":"2025-12-08T17:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.525826 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.525872 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.525880 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.525894 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.525902 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:56Z","lastTransitionTime":"2025-12-08T17:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.627783 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.627836 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.627852 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.627874 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.627889 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:56Z","lastTransitionTime":"2025-12-08T17:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.730260 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.730319 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.730343 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.730372 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.730395 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:56Z","lastTransitionTime":"2025-12-08T17:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.832476 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.832520 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.832529 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.832541 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.832551 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:56Z","lastTransitionTime":"2025-12-08T17:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.935667 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.935781 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.935802 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.935863 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:56 crc kubenswrapper[5112]: I1208 17:41:56.935883 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:56Z","lastTransitionTime":"2025-12-08T17:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.039288 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.039361 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.039381 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.039410 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.039429 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:57Z","lastTransitionTime":"2025-12-08T17:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.141993 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.142046 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.142061 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.142101 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.142120 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:57Z","lastTransitionTime":"2025-12-08T17:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.194443 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.194527 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.194573 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 17:41:57 crc kubenswrapper[5112]: E1208 17:41:57.194608 5112 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.194666 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 17:41:57 crc kubenswrapper[5112]: E1208 17:41:57.194745 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:42:13.194705385 +0000 UTC m=+110.204254126 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 08 17:41:57 crc kubenswrapper[5112]: E1208 17:41:57.194813 5112 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 08 17:41:57 crc kubenswrapper[5112]: E1208 17:41:57.194832 5112 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 08 17:41:57 crc kubenswrapper[5112]: E1208 17:41:57.194833 5112 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 08 17:41:57 crc kubenswrapper[5112]: E1208 17:41:57.194838 5112 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 08 17:41:57 crc kubenswrapper[5112]: E1208 17:41:57.194972 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:42:13.194933221 +0000 UTC m=+110.204481952 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 08 17:41:57 crc kubenswrapper[5112]: E1208 17:41:57.194974 5112 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 08 17:41:57 crc kubenswrapper[5112]: E1208 17:41:57.195041 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 17:42:13.195029494 +0000 UTC m=+110.204578225 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 08 17:41:57 crc kubenswrapper[5112]: E1208 17:41:57.194854 5112 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 08 17:41:57 crc kubenswrapper[5112]: E1208 17:41:57.195125 5112 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 08 17:41:57 crc kubenswrapper[5112]: E1208 17:41:57.195167 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 17:42:13.195155467 +0000 UTC m=+110.204704198 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.235947 5112 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.245700 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.245821 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.245851 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.245884 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.245909 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:57Z","lastTransitionTime":"2025-12-08T17:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.296237 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3c4fb553-8514-4194-847c-96d40f8b41e3-metrics-certs\") pod \"network-metrics-daemon-7jq8h\" (UID: \"3c4fb553-8514-4194-847c-96d40f8b41e3\") " pod="openshift-multus/network-metrics-daemon-7jq8h"
Dec 08 17:41:57 crc kubenswrapper[5112]: E1208 17:41:57.296503 5112 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 08 17:41:57 crc kubenswrapper[5112]: E1208 17:41:57.296601 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c4fb553-8514-4194-847c-96d40f8b41e3-metrics-certs podName:3c4fb553-8514-4194-847c-96d40f8b41e3 nodeName:}" failed. No retries permitted until 2025-12-08 17:42:13.296579276 +0000 UTC m=+110.306127987 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3c4fb553-8514-4194-847c-96d40f8b41e3-metrics-certs") pod "network-metrics-daemon-7jq8h" (UID: "3c4fb553-8514-4194-847c-96d40f8b41e3") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.315866 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jq8h"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.315953 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.315965 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 17:41:57 crc kubenswrapper[5112]: E1208 17:41:57.316118 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jq8h" podUID="3c4fb553-8514-4194-847c-96d40f8b41e3"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.316127 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 17:41:57 crc kubenswrapper[5112]: E1208 17:41:57.316286 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 08 17:41:57 crc kubenswrapper[5112]: E1208 17:41:57.316342 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 08 17:41:57 crc kubenswrapper[5112]: E1208 17:41:57.316428 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.348374 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.348427 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.348443 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.348464 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.348479 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:57Z","lastTransitionTime":"2025-12-08T17:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.397507 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:41:57 crc kubenswrapper[5112]: E1208 17:41:57.397682 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:13.397641465 +0000 UTC m=+110.407190166 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.450063 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.450126 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.450150 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.450169 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.450180 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:57Z","lastTransitionTime":"2025-12-08T17:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.552313 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.552359 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.552371 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.552387 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.552400 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:57Z","lastTransitionTime":"2025-12-08T17:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.655293 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.655382 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.655408 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.655438 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.655460 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:57Z","lastTransitionTime":"2025-12-08T17:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.758205 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.758284 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.758297 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.758316 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.758328 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:57Z","lastTransitionTime":"2025-12-08T17:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.860727 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.860773 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.860785 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.860804 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.860816 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:57Z","lastTransitionTime":"2025-12-08T17:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.963535 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.963611 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.963634 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.963662 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:57 crc kubenswrapper[5112]: I1208 17:41:57.963681 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:57Z","lastTransitionTime":"2025-12-08T17:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.066175 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.066221 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.066234 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.066250 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.066262 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:58Z","lastTransitionTime":"2025-12-08T17:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.168531 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.168646 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.168676 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.168707 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.168729 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:58Z","lastTransitionTime":"2025-12-08T17:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.271579 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.271634 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.271652 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.271671 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.271687 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:58Z","lastTransitionTime":"2025-12-08T17:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.375027 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.375154 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.375182 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.375214 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.375239 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:58Z","lastTransitionTime":"2025-12-08T17:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.477657 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.477702 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.477711 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.477727 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.477737 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:58Z","lastTransitionTime":"2025-12-08T17:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.579359 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.579575 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.579661 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.579778 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.579841 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:58Z","lastTransitionTime":"2025-12-08T17:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.681811 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.681854 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.681867 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.681885 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.681898 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:58Z","lastTransitionTime":"2025-12-08T17:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.784169 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.784241 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.784267 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.784295 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.784317 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:58Z","lastTransitionTime":"2025-12-08T17:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.886579 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.886630 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.886650 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.886670 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.886682 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:58Z","lastTransitionTime":"2025-12-08T17:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.989073 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.989129 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.989137 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.989148 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:58 crc kubenswrapper[5112]: I1208 17:41:58.989158 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:58Z","lastTransitionTime":"2025-12-08T17:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.093333 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.093555 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.093611 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.093709 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.093768 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:59Z","lastTransitionTime":"2025-12-08T17:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.196137 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.196209 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.196233 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.196251 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.196264 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:59Z","lastTransitionTime":"2025-12-08T17:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.298957 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.299002 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.299013 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.299045 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.299054 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:59Z","lastTransitionTime":"2025-12-08T17:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.316160 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:41:59 crc kubenswrapper[5112]: E1208 17:41:59.316327 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.316407 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.316450 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jq8h" Dec 08 17:41:59 crc kubenswrapper[5112]: E1208 17:41:59.316510 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.316396 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:41:59 crc kubenswrapper[5112]: E1208 17:41:59.316662 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jq8h" podUID="3c4fb553-8514-4194-847c-96d40f8b41e3" Dec 08 17:41:59 crc kubenswrapper[5112]: E1208 17:41:59.316756 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.400668 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.400703 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.400713 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.400725 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.400735 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:59Z","lastTransitionTime":"2025-12-08T17:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.502816 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.502871 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.502883 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.502902 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.502917 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:59Z","lastTransitionTime":"2025-12-08T17:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.604932 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.604972 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.604980 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.604992 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.605001 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:59Z","lastTransitionTime":"2025-12-08T17:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.707373 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.707436 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.707449 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.707465 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.707477 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:59Z","lastTransitionTime":"2025-12-08T17:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.810024 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.810067 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.810099 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.810115 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.810126 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:59Z","lastTransitionTime":"2025-12-08T17:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.912606 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.912679 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.912697 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.912729 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:59 crc kubenswrapper[5112]: I1208 17:41:59.912753 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:59Z","lastTransitionTime":"2025-12-08T17:41:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.015120 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.015169 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.015180 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.015196 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.015208 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:00Z","lastTransitionTime":"2025-12-08T17:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.117642 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.117689 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.117701 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.117719 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.117730 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:00Z","lastTransitionTime":"2025-12-08T17:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.220112 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.220193 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.220206 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.220220 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.220229 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:00Z","lastTransitionTime":"2025-12-08T17:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.322181 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.322253 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.322270 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.322294 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.322312 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:00Z","lastTransitionTime":"2025-12-08T17:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.424285 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.424336 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.424349 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.424367 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.424379 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:00Z","lastTransitionTime":"2025-12-08T17:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.526585 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.526628 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.526641 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.526657 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.526669 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:00Z","lastTransitionTime":"2025-12-08T17:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.628892 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.628935 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.628945 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.628962 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.628974 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:00Z","lastTransitionTime":"2025-12-08T17:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.731048 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.731122 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.731136 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.731151 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.731162 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:00Z","lastTransitionTime":"2025-12-08T17:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.832751 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.832808 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.832824 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.832845 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.832860 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:00Z","lastTransitionTime":"2025-12-08T17:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.935183 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.935299 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.935320 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.935344 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:00 crc kubenswrapper[5112]: I1208 17:42:00.935357 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:00Z","lastTransitionTime":"2025-12-08T17:42:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.037065 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.037134 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.037148 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.037166 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.037178 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:01Z","lastTransitionTime":"2025-12-08T17:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.138678 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.138745 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.138777 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.138795 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.138807 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:01Z","lastTransitionTime":"2025-12-08T17:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.240691 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.240739 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.240750 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.240765 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.240774 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:01Z","lastTransitionTime":"2025-12-08T17:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.322403 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.322478 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jq8h"
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.322409 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 17:42:01 crc kubenswrapper[5112]: E1208 17:42:01.322628 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 08 17:42:01 crc kubenswrapper[5112]: E1208 17:42:01.322753 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jq8h" podUID="3c4fb553-8514-4194-847c-96d40f8b41e3"
Dec 08 17:42:01 crc kubenswrapper[5112]: E1208 17:42:01.322859 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.322897 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 17:42:01 crc kubenswrapper[5112]: E1208 17:42:01.323251 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.342891 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.342947 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.342965 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.342987 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.343005 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:01Z","lastTransitionTime":"2025-12-08T17:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.444870 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.444908 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.444919 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.444933 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.444942 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:01Z","lastTransitionTime":"2025-12-08T17:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.547545 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.547584 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.547593 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.547606 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.547616 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:01Z","lastTransitionTime":"2025-12-08T17:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.649956 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.650005 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.650017 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.650036 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.650049 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:01Z","lastTransitionTime":"2025-12-08T17:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.751227 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.751265 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.751274 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.751289 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.751298 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:01Z","lastTransitionTime":"2025-12-08T17:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.853578 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.853634 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.853651 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.853673 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.853692 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:01Z","lastTransitionTime":"2025-12-08T17:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.956074 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.956165 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.956183 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.956208 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:01 crc kubenswrapper[5112]: I1208 17:42:01.956225 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:01Z","lastTransitionTime":"2025-12-08T17:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.058269 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.058346 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.058372 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.058403 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.058427 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:02Z","lastTransitionTime":"2025-12-08T17:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.160462 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.160520 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.160533 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.160553 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.160568 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:02Z","lastTransitionTime":"2025-12-08T17:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.262155 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.262208 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.262220 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.262240 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.262254 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:02Z","lastTransitionTime":"2025-12-08T17:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.364712 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.365415 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.365452 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.365471 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.365482 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:02Z","lastTransitionTime":"2025-12-08T17:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.467881 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.467925 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.467934 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.467950 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.467959 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:02Z","lastTransitionTime":"2025-12-08T17:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.570093 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.570136 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.570144 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.570159 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.570167 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:02Z","lastTransitionTime":"2025-12-08T17:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.672209 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.672288 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.672312 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.672341 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.672364 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:02Z","lastTransitionTime":"2025-12-08T17:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.774290 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.774338 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.774349 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.774366 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.774378 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:02Z","lastTransitionTime":"2025-12-08T17:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.877521 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.877611 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.877638 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.877675 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.877698 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:02Z","lastTransitionTime":"2025-12-08T17:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.979566 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.979611 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.979629 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.979658 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:02 crc kubenswrapper[5112]: I1208 17:42:02.979672 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:02Z","lastTransitionTime":"2025-12-08T17:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.082156 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.082242 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.082269 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.082301 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.082325 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:03Z","lastTransitionTime":"2025-12-08T17:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.184485 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.184577 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.184604 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.184632 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.184651 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:03Z","lastTransitionTime":"2025-12-08T17:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.286951 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.287018 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.287028 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.287043 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.287052 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:03Z","lastTransitionTime":"2025-12-08T17:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.315900 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 17:42:03 crc kubenswrapper[5112]: E1208 17:42:03.316068 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.316166 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.316219 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jq8h"
Dec 08 17:42:03 crc kubenswrapper[5112]: E1208 17:42:03.316363 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 08 17:42:03 crc kubenswrapper[5112]: E1208 17:42:03.316420 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jq8h" podUID="3c4fb553-8514-4194-847c-96d40f8b41e3"
Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.316439 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 17:42:03 crc kubenswrapper[5112]: E1208 17:42:03.316521 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.338438 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d35301b2-73ca-44c7-bb4c-e7e68d41ac54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://635a0eb4e03cd1986945a33c0535e16c4f6d5cd1fbcbdb9e1ecf08f5ef7f71a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://26119bf950efdaf6bdf29fdd288dcde55f46e233074dadd05d09ae9662a98ac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62545f7eda9644c677205810fa88d197c6edaf171a4f7972db714c29d0e0c0f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7a8bdf5c7a
f29378b589fe72e7884a1b14b3ee02680ca8842031e246f786fa59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a8bdf5c7af29378b589fe72e7884a1b14b3ee02680ca8842031e246f786fa59\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T17:41:31Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW1208 17:41:31.167389 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 17:41:31.167693 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 17:41:31.168628 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3936714883/tls.crt::/tmp/serving-cert-3936714883/tls.key\\\\\\\"\\\\nI1208 17:41:31.681853 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 17:41:31.683635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 17:41:31.683651 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 17:41:31.683675 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 17:41:31.683681 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 17:41:31.690777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 17:41:31.690804 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 17:41:31.690811 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 17:41:31.690838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 17:41:31.690843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 17:41:31.690848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 17:41:31.690851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 17:41:31.690855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 17:41:31.693539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T17:41:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://43d313abdeed2aafe14cb80ae38e97b5bbdc62197de02664fbc6c17597822faa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3fbd9e0043ef37c10fbc9823215d21c9b1eee4025f73932cd6cbf3055d5f2397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fbd9e0043ef37c10fbc9823215d21c9b1eee4025f73932cd6cbf3055d5f2397\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.350401 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.364796 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-kvv4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"288ee203-be3f-4176-90b2-7d95ee47aee8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbcf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kvv4v\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.373858 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jq8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c4fb553-8514-4194-847c-96d40f8b41e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mv6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mv6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jq8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.384781 5112 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95e46da0-94bb-4d22-804b-b3018984cdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56lk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56lk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s6wzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.389008 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.389066 5112 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.389114 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.389140 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.389159 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:03Z","lastTransitionTime":"2025-12-08T17:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.407280 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ad0b160-7036-4cfb-9738-1e0e8ebe1e5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://953eaf00aeddf0f031eb9db85dda27332777dd31ac6746dfdedcc13ed20cb02c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://a7b9e7098ad13452cf8f0aa13c84480bf630b57c0296cec645e8fd4f030b13fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7ace3eb0fbb6c37ad43df89af7c25f6a0bda9c7e079a6bfb7683984630e7cd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c9534bda3d71b68f6920f0c8a5dd54d3d31bac188d8fb76a1d29a3f5f0b621a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://16bdf27bbd7b756aec823f0df94a6a72c5ad978e71a5e24824de2ab45e54c0c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://60a4953961d430db10a6fe21df995549ba6cbe84be5c4e7bfc3788c53c152bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60a4953961d430db10a6fe21df995549ba6cbe84be5c4e7bfc3788c53c152bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://276a86f68c5958c297d0c493b713318ea5ebe302666d6478a229eae2ffed90b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://276a86f68c5958c297d0c493b713318ea5ebe302666d6478a229eae2ffed90b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8759198baf7f22676f14d8513f3af488df4c58c4ed65db59aac2bff888a6d878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://8759198baf7f22676f14d8513f3af488df4c58c4ed65db59aac2bff888a6d878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.418879 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a7a500b-9152-4fa4-a5ef-7a037610043a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ea6605166b2660aac60c892c3aa4300f70f3c325fa54b0c5cebab4c59e7e44d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1f8f801ff9a8f57c7935a0dc59f8db7ead28e39b9b0235c5fbac0238e1ae4d1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://407e6dc04957ad635291d63043e12fc7751c6de36462219e6f8e991af59b523c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f47b69e17f8b8b7e2c46f449515d3eb8408a6ef649bf396eef3abeac2d4b2483\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.435138 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.446468 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.460948 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.480247 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9xjh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"575dcc54-1cfa-45ab-8c22-087fcf27f142\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9xjh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.488574 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"367e7840-8095-41c1-93ec-9c02ff4d243d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://63bd2b5515bea7e14b54005f1477f959aac15ff6b2771db37fc28e46eea6be70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://86e97499087a3161ebf425803d6ac7e66513b7c73c759c730b8a17858f4e1b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e97499087a3161ebf425803d6ac7e66513b7c73c759c730b8a17858f4e1b82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.490772 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:03 crc 
kubenswrapper[5112]: I1208 17:42:03.490826 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.490844 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.490868 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.490889 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:03Z","lastTransitionTime":"2025-12-08T17:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.499717 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.511076 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"472d4dbe-4674-43ba-98da-98502eccb960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv8p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv8p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-b7fmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.521353 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4hrlr" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c78b0cc-34ce-48fe-aeb3-f84b04fc6af6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4hrlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.533364 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.544500 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-rsc28" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a63a9eaf-972d-4a8f-a9e5-f0f397bf8e9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pm48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rsc28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.566963 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0510de3f-316a-4902-a746-a746c3ce594c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\
\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"
name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ng27z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.581364 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a54de98a-e0fb-42e6-9458-35bf008a1af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8e1ad1521591e581cd35
7d3b49dde54e9a2c1a793edc8dced64f3acbe9f7f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://69cc882495a4c55c83d8793d16e873cde0e5c81bbf76ed52eec3ed59b99b937f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\
\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5e0e157a3ba41263bd7a39a6c64f50ccf232bc55ef3df90ffbbd314418ce69bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0018c38fa9031e6a1ae5e3b128294b6e93ace41c3e09982a94e2af830b208186\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\
\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0018c38fa9031e6a1ae5e3b128294b6e93ace41c3e09982a94e2af830b208186\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.592784 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.592818 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.592828 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.592842 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.592852 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:03Z","lastTransitionTime":"2025-12-08T17:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.695592 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.695754 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.695775 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.695803 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.695820 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:03Z","lastTransitionTime":"2025-12-08T17:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.774226 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.774276 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.774297 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.774364 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.774384 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:03Z","lastTransitionTime":"2025-12-08T17:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:03 crc kubenswrapper[5112]: E1208 17:42:03.791423 5112 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bfc9941-22f6-447c-a313-68da2bceb39a\\\",\\\"systemUUID\\\":\\\"b5fe6617-167d-4502-9bb8-e694c6fec87c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.795652 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.795728 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.795747 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.795782 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.795802 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:03Z","lastTransitionTime":"2025-12-08T17:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:03 crc kubenswrapper[5112]: E1208 17:42:03.811074 5112 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bfc9941-22f6-447c-a313-68da2bceb39a\\\",\\\"systemUUID\\\":\\\"b5fe6617-167d-4502-9bb8-e694c6fec87c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.815172 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.815220 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.815239 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.815260 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.815277 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:03Z","lastTransitionTime":"2025-12-08T17:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:03 crc kubenswrapper[5112]: E1208 17:42:03.830357 5112 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bfc9941-22f6-447c-a313-68da2bceb39a\\\",\\\"systemUUID\\\":\\\"b5fe6617-167d-4502-9bb8-e694c6fec87c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.833726 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.833782 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.833796 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.833816 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.833831 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:03Z","lastTransitionTime":"2025-12-08T17:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:03 crc kubenswrapper[5112]: E1208 17:42:03.848319 5112 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bfc9941-22f6-447c-a313-68da2bceb39a\\\",\\\"systemUUID\\\":\\\"b5fe6617-167d-4502-9bb8-e694c6fec87c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.851522 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.851594 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.851613 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.851633 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.851649 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:03Z","lastTransitionTime":"2025-12-08T17:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:03 crc kubenswrapper[5112]: E1208 17:42:03.865156 5112 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1bfc9941-22f6-447c-a313-68da2bceb39a\\\",\\\"systemUUID\\\":\\\"b5fe6617-167d-4502-9bb8-e694c6fec87c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:03 crc kubenswrapper[5112]: E1208 17:42:03.865320 5112 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.866766 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.866820 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.866833 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.866850 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.866862 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:03Z","lastTransitionTime":"2025-12-08T17:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.969135 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.969276 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.969303 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.969333 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:03 crc kubenswrapper[5112]: I1208 17:42:03.969353 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:03Z","lastTransitionTime":"2025-12-08T17:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.072529 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.072634 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.072660 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.072692 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.072717 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:04Z","lastTransitionTime":"2025-12-08T17:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.175660 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.175752 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.175773 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.175801 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.175820 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:04Z","lastTransitionTime":"2025-12-08T17:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.279104 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.279155 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.279167 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.279185 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.279198 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:04Z","lastTransitionTime":"2025-12-08T17:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.381693 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.381779 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.381806 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.381839 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.381863 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:04Z","lastTransitionTime":"2025-12-08T17:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.484603 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.484673 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.484691 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.484718 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.484734 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:04Z","lastTransitionTime":"2025-12-08T17:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.586809 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.586865 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.586882 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.586903 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.586921 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:04Z","lastTransitionTime":"2025-12-08T17:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.688409 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.688805 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.688824 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.688846 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.688866 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:04Z","lastTransitionTime":"2025-12-08T17:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.791346 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.791423 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.791443 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.791469 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.791491 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:04Z","lastTransitionTime":"2025-12-08T17:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.893640 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.893685 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.893695 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.893712 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.893721 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:04Z","lastTransitionTime":"2025-12-08T17:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.996000 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.996045 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.996055 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.996140 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:04 crc kubenswrapper[5112]: I1208 17:42:04.996156 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:04Z","lastTransitionTime":"2025-12-08T17:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.098773 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.098820 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.098831 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.098848 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.098860 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:05Z","lastTransitionTime":"2025-12-08T17:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.201476 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.201519 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.201530 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.201545 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.201557 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:05Z","lastTransitionTime":"2025-12-08T17:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.303246 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.303305 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.303316 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.303330 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.303340 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:05Z","lastTransitionTime":"2025-12-08T17:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.315972 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.316036 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.316145 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jq8h"
Dec 08 17:42:05 crc kubenswrapper[5112]: E1208 17:42:05.316151 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 08 17:42:05 crc kubenswrapper[5112]: E1208 17:42:05.316337 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 08 17:42:05 crc kubenswrapper[5112]: E1208 17:42:05.316389 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jq8h" podUID="3c4fb553-8514-4194-847c-96d40f8b41e3"
Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.316897 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 17:42:05 crc kubenswrapper[5112]: E1208 17:42:05.317024 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.405287 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.405335 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.405346 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.405364 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.405392 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:05Z","lastTransitionTime":"2025-12-08T17:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.507565 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.507621 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.507642 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.507669 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.507688 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:05Z","lastTransitionTime":"2025-12-08T17:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.610073 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.610171 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.610189 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.610219 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.610238 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:05Z","lastTransitionTime":"2025-12-08T17:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.712105 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.712155 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.712168 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.712184 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.712196 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:05Z","lastTransitionTime":"2025-12-08T17:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.753954 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"421c1c49a7ff024abb1ba074633a8a889a7d0a55b125aa8c2cc96de65f4585d4"} Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.768723 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d35301b2-73ca-44c7-bb4c-e7e68d41ac54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://635a0eb4e03cd1986945a33c0535e16c4f6d5cd1fbcbdb9e1ecf08f5ef7f71a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://26119bf950efdaf6bdf29fdd288dcde55f46e233074dadd05d09ae9662a98ac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62545f7eda9644c677205810fa88d197c6edaf171a4f7972db714c29d0e0c0f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7a8bdf5c7a
f29378b589fe72e7884a1b14b3ee02680ca8842031e246f786fa59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a8bdf5c7af29378b589fe72e7884a1b14b3ee02680ca8842031e246f786fa59\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T17:41:31Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW1208 17:41:31.167389 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 17:41:31.167693 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 17:41:31.168628 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3936714883/tls.crt::/tmp/serving-cert-3936714883/tls.key\\\\\\\"\\\\nI1208 17:41:31.681853 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 17:41:31.683635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 17:41:31.683651 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 17:41:31.683675 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 17:41:31.683681 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 17:41:31.690777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 17:41:31.690804 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 17:41:31.690811 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 17:41:31.690838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 17:41:31.690843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 17:41:31.690848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 17:41:31.690851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 17:41:31.690855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 17:41:31.693539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T17:41:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://43d313abdeed2aafe14cb80ae38e97b5bbdc62197de02664fbc6c17597822faa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3fbd9e0043ef37c10fbc9823215d21c9b1eee4025f73932cd6cbf3055d5f2397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fbd9e0043ef37c10fbc9823215d21c9b1eee4025f73932cd6cbf3055d5f2397\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.782070 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.792373 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-kvv4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"288ee203-be3f-4176-90b2-7d95ee47aee8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbcf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kvv4v\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.804483 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jq8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c4fb553-8514-4194-847c-96d40f8b41e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mv6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mv6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jq8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.814001 5112 kubelet_node_status.go:736] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.814060 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.814074 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.814112 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.814127 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:05Z","lastTransitionTime":"2025-12-08T17:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.815119 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95e46da0-94bb-4d22-804b-b3018984cdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56lk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56lk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s6wzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.836748 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ad0b160-7036-4cfb-9738-1e0e8ebe1e5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://953eaf00aeddf0f031eb9db85dda27332777dd31ac6746dfdedcc13ed20cb02c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://a7b9e7098ad13452cf8f0aa13c84480bf630b57c0296cec645e8fd4f030b13fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7ace3eb0fbb6c37ad43df89af7c25f6a0bda9c7e079a6bfb7683984630e7cd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c9534bda3d71b68f6920f0c8a5dd54d3d31bac188d8fb76a1d29a3f5f0b621a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://16bdf27bbd7b756aec823f0df94a6a72c5ad978e71a5e24824de2ab45e54c0c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://60a4953961d430db10a6fe21df995549ba6cbe84be5c4e7bfc3788c53c152bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60a4953961d430db10a6fe21df995549ba6cbe84be5c4e7bfc3788c53c152bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://276a86f68c5958c297d0c493b713318ea5ebe302666d6478a229eae2ffed90b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://276a86f68c5958c297d0c493b713318ea5ebe302666d6478a229eae2ffed90b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8759198baf7f22676f14d8513f3af488df4c58c4ed65db59aac2bff888a6d878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://8759198baf7f22676f14d8513f3af488df4c58c4ed65db59aac2bff888a6d878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.847670 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a7a500b-9152-4fa4-a5ef-7a037610043a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ea6605166b2660aac60c892c3aa4300f70f3c325fa54b0c5cebab4c59e7e44d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1f8f801ff9a8f57c7935a0dc59f8db7ead28e39b9b0235c5fbac0238e1ae4d1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://407e6dc04957ad635291d63043e12fc7751c6de36462219e6f8e991af59b523c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f47b69e17f8b8b7e2c46f449515d3eb8408a6ef649bf396eef3abeac2d4b2483\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.856680 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://421c1c49a7ff024abb1ba074633a8a889a7d0a55b125aa8c2cc96de65f4585d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.867868 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.880997 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.892808 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9xjh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"575dcc54-1cfa-45ab-8c22-087fcf27f142\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9xjh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.900991 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"367e7840-8095-41c1-93ec-9c02ff4d243d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://63bd2b5515bea7e14b54005f1477f959aac15ff6b2771db37fc28e46eea6be70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://86e97499087a3161ebf425803d6ac7e66513b7c73c759c730b8a17858f4e1b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e97499087a3161ebf425803d6ac7e66513b7c73c759c730b8a17858f4e1b82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.912342 5112 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.915627 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.915679 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.915692 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.915711 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.915723 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:05Z","lastTransitionTime":"2025-12-08T17:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.921833 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"472d4dbe-4674-43ba-98da-98502eccb960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been 
read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv8p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv8p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-b7fmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.930892 5112 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-4hrlr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c78b0cc-34ce-48fe-aeb3-f84b04fc6af6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4hrlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.942451 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.950069 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-rsc28" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a63a9eaf-972d-4a8f-a9e5-f0f397bf8e9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pm48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rsc28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.966684 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0510de3f-316a-4902-a746-a746c3ce594c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\
\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"
name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ng27z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:05 crc kubenswrapper[5112]: I1208 17:42:05.977191 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a54de98a-e0fb-42e6-9458-35bf008a1af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8e1ad1521591e581cd35
7d3b49dde54e9a2c1a793edc8dced64f3acbe9f7f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://69cc882495a4c55c83d8793d16e873cde0e5c81bbf76ed52eec3ed59b99b937f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\
\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5e0e157a3ba41263bd7a39a6c64f50ccf232bc55ef3df90ffbbd314418ce69bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0018c38fa9031e6a1ae5e3b128294b6e93ace41c3e09982a94e2af830b208186\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\
\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0018c38fa9031e6a1ae5e3b128294b6e93ace41c3e09982a94e2af830b208186\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.017629 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.017680 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.017691 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.017707 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.017720 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:06Z","lastTransitionTime":"2025-12-08T17:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.119984 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.120071 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.120115 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.120143 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.120160 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:06Z","lastTransitionTime":"2025-12-08T17:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.223421 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.223582 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.223748 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.223780 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.223798 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:06Z","lastTransitionTime":"2025-12-08T17:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.326123 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.326174 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.326187 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.326201 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.326212 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:06Z","lastTransitionTime":"2025-12-08T17:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.427891 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.427965 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.427983 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.428012 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.428030 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:06Z","lastTransitionTime":"2025-12-08T17:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.531159 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.531474 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.531555 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.531624 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.531687 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:06Z","lastTransitionTime":"2025-12-08T17:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.633767 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.634355 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.634462 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.634572 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.634699 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:06Z","lastTransitionTime":"2025-12-08T17:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.737552 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.737595 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.737606 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.737621 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.737631 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:06Z","lastTransitionTime":"2025-12-08T17:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.759152 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-4hrlr" event={"ID":"5c78b0cc-34ce-48fe-aeb3-f84b04fc6af6","Type":"ContainerStarted","Data":"fed422b7e0cecc71b17be69bf8ebd893f3583f2d3bd691103e41544d9924e6df"} Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.773954 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a54de98a-e0fb-42e6-9458-35bf008a1af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8e1ad1521591e581cd357d3b49dde54e9a2c1a793edc8dced64f3acbe9f7f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"re
sources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://69cc882495a4c55c83d8793d16e873cde0e5c81bbf76ed52eec3ed59b99b937f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5e0e157a3ba41263bd7a39a6c64f50ccf232bc55ef3df90ffbbd314418ce69bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2
355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0018c38fa9031e6a1ae5e3b128294b6e93ace41c3e09982a94e2af830b208186\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0018c38fa9031e6a1ae5e3b128294b6e93ace41c3e09982a94e2af830b208186\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\
\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.784646 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d35301b2-73ca-44c7-bb4c-e7e68d41ac54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://635a0eb4e03cd1986945a33c0535e16c4f6d5cd1fbcbdb9e1ecf08f5ef7f71a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://26119bf950efdaf6bdf29fdd288dcde55f46e233074dadd05d09ae9662a98ac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62545f7eda9644c677205810fa88d197c6edaf171a4f7972db714c29d0e0c0f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7a8bdf5c7a
f29378b589fe72e7884a1b14b3ee02680ca8842031e246f786fa59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a8bdf5c7af29378b589fe72e7884a1b14b3ee02680ca8842031e246f786fa59\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T17:41:31Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW1208 17:41:31.167389 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 17:41:31.167693 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 17:41:31.168628 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3936714883/tls.crt::/tmp/serving-cert-3936714883/tls.key\\\\\\\"\\\\nI1208 17:41:31.681853 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 17:41:31.683635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 17:41:31.683651 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 17:41:31.683675 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 17:41:31.683681 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 17:41:31.690777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 17:41:31.690804 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 17:41:31.690811 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 17:41:31.690838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 17:41:31.690843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 17:41:31.690848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 17:41:31.690851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 17:41:31.690855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 17:41:31.693539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T17:41:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://43d313abdeed2aafe14cb80ae38e97b5bbdc62197de02664fbc6c17597822faa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3fbd9e0043ef37c10fbc9823215d21c9b1eee4025f73932cd6cbf3055d5f2397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fbd9e0043ef37c10fbc9823215d21c9b1eee4025f73932cd6cbf3055d5f2397\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.794511 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.836385 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-kvv4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"288ee203-be3f-4176-90b2-7d95ee47aee8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbcf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kvv4v\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.839598 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.839635 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.839646 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.839664 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.839677 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:06Z","lastTransitionTime":"2025-12-08T17:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.851040 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jq8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c4fb553-8514-4194-847c-96d40f8b41e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mv6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mv6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jq8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.867840 5112 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95e46da0-94bb-4d22-804b-b3018984cdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56lk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56lk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s6wzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.885876 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ad0b160-7036-4cfb-9738-1e0e8ebe1e5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://953eaf00aeddf0f031eb9db85dda27332777dd31ac6746dfdedcc13ed20cb02c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://a7b9e7098ad13452cf8f0aa13c84480bf630b57c0296cec645e8fd4f030b13fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7ace3eb0fbb6c37ad43df89af7c25f6a0bda9c7e079a6bfb7683984630e7cd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c9534bda3d71b68f6920f0c8a5dd54d3d31bac188d8fb76a1d29a3f5f0b621a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://16bdf27bbd7b756aec823f0df94a6a72c5ad978e71a5e24824de2ab45e54c0c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://60a4953961d430db10a6fe21df995549ba6cbe84be5c4e7bfc3788c53c152bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60a4953961d430db10a6fe21df995549ba6cbe84be5c4e7bfc3788c53c152bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://276a86f68c5958c297d0c493b713318ea5ebe302666d6478a229eae2ffed90b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://276a86f68c5958c297d0c493b713318ea5ebe302666d6478a229eae2ffed90b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8759198baf7f22676f14d8513f3af488df4c58c4ed65db59aac2bff888a6d878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://8759198baf7f22676f14d8513f3af488df4c58c4ed65db59aac2bff888a6d878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.897350 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a7a500b-9152-4fa4-a5ef-7a037610043a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ea6605166b2660aac60c892c3aa4300f70f3c325fa54b0c5cebab4c59e7e44d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1f8f801ff9a8f57c7935a0dc59f8db7ead28e39b9b0235c5fbac0238e1ae4d1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://407e6dc04957ad635291d63043e12fc7751c6de36462219e6f8e991af59b523c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f47b69e17f8b8b7e2c46f449515d3eb8408a6ef649bf396eef3abeac2d4b2483\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.907500 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://421c1c49a7ff024abb1ba074633a8a889a7d0a55b125aa8c2cc96de65f4585d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.916739 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.926024 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.936978 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9xjh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"575dcc54-1cfa-45ab-8c22-087fcf27f142\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9xjh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.941135 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.941164 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.941175 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:06 
crc kubenswrapper[5112]: I1208 17:42:06.941188 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.941198 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:06Z","lastTransitionTime":"2025-12-08T17:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.945538 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"367e7840-8095-41c1-93ec-9c02ff4d243d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://63bd2b5515bea7e14b54005f1477f959aac15ff6b2771db37fc28e46eea6be70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://86e97499087a3161ebf425803d6ac7e66513b7c73c759c730b8a17858f4e1b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e97499087a3161ebf425803d6ac7e66513b7c73c759c730b8a17858f4e1b82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:2
4Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.955228 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.964635 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"472d4dbe-4674-43ba-98da-98502eccb960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv8p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv8p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-b7fmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.972833 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4hrlr" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c78b0cc-34ce-48fe-aeb3-f84b04fc6af6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://fed422b7e0cecc71b17be69bf8ebd893f3583f2d3bd691103e41544d9924e6df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mo
untPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4hrlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.981830 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:06 crc kubenswrapper[5112]: I1208 17:42:06.990118 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-rsc28" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a63a9eaf-972d-4a8f-a9e5-f0f397bf8e9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pm48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rsc28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.003298 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0510de3f-316a-4902-a746-a746c3ce594c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\
\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"
name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ng27z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.044157 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.044214 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.044226 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.044242 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.044252 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:07Z","lastTransitionTime":"2025-12-08T17:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.146756 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.146822 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.146834 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.146853 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.146866 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:07Z","lastTransitionTime":"2025-12-08T17:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.248921 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.248965 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.248976 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.248992 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.249003 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:07Z","lastTransitionTime":"2025-12-08T17:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.319743 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:42:07 crc kubenswrapper[5112]: E1208 17:42:07.319837 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.320021 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jq8h" Dec 08 17:42:07 crc kubenswrapper[5112]: E1208 17:42:07.320105 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jq8h" podUID="3c4fb553-8514-4194-847c-96d40f8b41e3" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.320648 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:42:07 crc kubenswrapper[5112]: E1208 17:42:07.320706 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.321114 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:42:07 crc kubenswrapper[5112]: E1208 17:42:07.321170 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.351726 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.351771 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.351782 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.351798 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.351808 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:07Z","lastTransitionTime":"2025-12-08T17:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.453718 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.453773 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.453783 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.453796 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.453805 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:07Z","lastTransitionTime":"2025-12-08T17:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.556335 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.556378 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.556391 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.556407 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.556417 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:07Z","lastTransitionTime":"2025-12-08T17:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.658466 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.658539 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.658553 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.658572 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.658584 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:07Z","lastTransitionTime":"2025-12-08T17:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.760417 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.760450 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.760458 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.760470 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.760480 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:07Z","lastTransitionTime":"2025-12-08T17:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.762138 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" event={"ID":"95e46da0-94bb-4d22-804b-b3018984cdac","Type":"ContainerStarted","Data":"06e99bae4932494f4de98999926cd28dc808f1a2982c7e8e2372927bc72d1153"} Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.862272 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.862592 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.862603 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.862616 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.862625 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:07Z","lastTransitionTime":"2025-12-08T17:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.964464 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.964508 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.964522 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.964539 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:07 crc kubenswrapper[5112]: I1208 17:42:07.964550 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:07Z","lastTransitionTime":"2025-12-08T17:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.067073 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.067155 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.067178 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.067197 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.067209 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:08Z","lastTransitionTime":"2025-12-08T17:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.169604 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.169641 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.169649 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.169663 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.169672 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:08Z","lastTransitionTime":"2025-12-08T17:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.272109 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.272444 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.272459 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.272475 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.272487 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:08Z","lastTransitionTime":"2025-12-08T17:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.374259 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.374298 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.374307 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.374320 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.374330 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:08Z","lastTransitionTime":"2025-12-08T17:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.476374 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.476423 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.476434 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.476450 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.476462 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:08Z","lastTransitionTime":"2025-12-08T17:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.578197 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.578431 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.578441 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.578454 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.578463 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:08Z","lastTransitionTime":"2025-12-08T17:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.680616 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.680683 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.680707 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.680732 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.680751 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:08Z","lastTransitionTime":"2025-12-08T17:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.767034 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" event={"ID":"95e46da0-94bb-4d22-804b-b3018984cdac","Type":"ContainerStarted","Data":"7e81eb6df709930174252fbfee132c752fdf972b294a437e8d67f812283e0aaf"} Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.771225 5112 generic.go:358] "Generic (PLEG): container finished" podID="575dcc54-1cfa-45ab-8c22-087fcf27f142" containerID="5254317985ea3f0c6e29ebbbc0afa8a7fcb3a10c22298efe7fa25998a259a60f" exitCode=0 Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.771306 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9xjh5" event={"ID":"575dcc54-1cfa-45ab-8c22-087fcf27f142","Type":"ContainerDied","Data":"5254317985ea3f0c6e29ebbbc0afa8a7fcb3a10c22298efe7fa25998a259a60f"} Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.779598 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-rsc28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a63a9eaf-972d-4a8f-a9e5-f0f397bf8e9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pm48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rsc28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.783300 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.783344 5112 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.783361 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.783380 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.783396 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:08Z","lastTransitionTime":"2025-12-08T17:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.798926 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0510de3f-316a-4902-a746-a746c3ce594c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ng27z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.810399 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a54de98a-e0fb-42e6-9458-35bf008a1af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8e1ad1521591e581cd357d3b49dde54e9a2c1a793edc8dced64f3acbe9f7f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://69cc882495a4c55c83d8793d16e873cde0e5c81bbf76ed52eec3ed59b99b937f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5e0e157a3ba41263bd7a39a6c64f50ccf232bc55ef3df90ffbbd314418ce69bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0018c38fa9031e6a1ae5e3b128294b6e93ace41c3e09982a94e2af830b208186\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0018c38fa9031e6a1ae5e3b128294b6e93ace41c3e09982a94e2af830b208186\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.825514 5112 
status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d35301b2-73ca-44c7-bb4c-e7e68d41ac54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://635a0eb4e03cd1986945a33c0535e16c4f6d5cd1fbcbdb9e1ecf08f5ef7f71a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://26119bf950efdaf6bdf29fdd288dcde55f46e233074dadd05d09ae9662a98ac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62545f7eda9644c677205810fa88d197c6edaf171a4f7972db714c29d0e0c0f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7
e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7a8bdf5c7af29378b589fe72e7884a1b14b3ee02680ca8842031e246f786fa59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a8bdf5c7af29378b589fe72e7884a1b14b3ee02680ca8842031e246f786fa59\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T17:41:31Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW1208 17:41:31.167389 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 17:41:31.167693 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 17:41:31.168628 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3936714883/tls.crt::/tmp/serving-cert-3936714883/tls.key\\\\\\\"\\\\nI1208 17:41:31.681853 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 17:41:31.683635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 17:41:31.683651 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 17:41:31.683675 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 17:41:31.683681 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 17:41:31.690777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 17:41:31.690804 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 17:41:31.690811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 17:41:31.690838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 17:41:31.690843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 17:41:31.690848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 17:41:31.690851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 17:41:31.690855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 17:41:31.693539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T17:41:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://43d313abdeed2aafe14cb80ae38e97b5bbdc62197de02664fbc6c17597822faa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\
":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3fbd9e0043ef37c10fbc9823215d21c9b1eee4025f73932cd6cbf3055d5f2397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fbd9e0043ef37c10fbc9823215d21c9b1eee4025f73932cd6cbf3055d5f2397\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.836462 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.848708 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-kvv4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"288ee203-be3f-4176-90b2-7d95ee47aee8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbcf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kvv4v\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.858528 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jq8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c4fb553-8514-4194-847c-96d40f8b41e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mv6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mv6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jq8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.870304 5112 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95e46da0-94bb-4d22-804b-b3018984cdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7e81eb6df709930174252fbfee132c752fdf972b294a437e8d67f812283e0aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\
":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56lk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://06e99bae4932494f4de98999926cd28dc808f1a2982c7e8e2372927bc72d1153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56lk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s6wzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 
17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.887835 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ad0b160-7036-4cfb-9738-1e0e8ebe1e5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://953eaf00aeddf0f031eb9db85dda27332777dd31ac6746dfdedcc13ed20cb02c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"sta
tic-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://a7b9e7098ad13452cf8f0aa13c84480bf630b57c0296cec645e8fd4f030b13fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7ace3eb0fbb6c37ad43df89af7c25f6a0bda9c7e079a6bfb7683984630e7cd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25
770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c9534bda3d71b68f6920f0c8a5dd54d3d31bac188d8fb76a1d29a3f5f0b621a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://16bdf27bbd7b756aec823f0df94a6a72c5ad978e71a5e24824de2ab45e54c0c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b62
67447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://60a4953961d430db10a6fe21df995549ba6cbe84be5c4e7bfc3788c53c152bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60a4953961d430db10a6fe21df995549ba6cbe84be5c4e7bfc3788c53c152bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://276a86f68c5958c297d0c493b713318ea5ebe302666d6478a229eae2ffed90b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://276a86f68c5958c297d0c493b713318ea5ebe302666d6478a229eae2ffed90b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8759198baf7f22676f14d8513f3af488df4c58c4ed65db59aac2bff888a6d878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"
requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8759198baf7f22676f14d8513f3af488df4c58c4ed65db59aac2bff888a6d878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.890018 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.890058 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.890097 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.890112 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:08 crc kubenswrapper[5112]: 
I1208 17:42:08.890139 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:08Z","lastTransitionTime":"2025-12-08T17:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.900759 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a7a500b-9152-4fa4-a5ef-7a037610043a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ea6605166b2660aac60c892c3aa4300f70f3c325fa54b0c5cebab4c59e7e44d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832
b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1f8f801ff9a8f57c7935a0dc59f8db7ead28e39b9b0235c5fbac0238e1ae4d1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu
\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://407e6dc04957ad635291d63043e12fc7751c6de36462219e6f8e991af59b523c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f47b69e17f8b8b7e2c46f449515d3eb8408a6ef649bf396eef3abeac2d4b2483\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\
":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.912307 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://421c1c49a7ff024abb1ba074633a8a889a7d0a55b125aa8c2cc96de65f4585d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.921821 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.932756 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.943979 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9xjh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"575dcc54-1cfa-45ab-8c22-087fcf27f142\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9xjh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.951473 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"367e7840-8095-41c1-93ec-9c02ff4d243d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://63bd2b5515bea7e14b54005f1477f959aac15ff6b2771db37fc28e46eea6be70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://86e97499087a3161ebf425803d6ac7e66513b7c73c759c730b8a17858f4e1b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e97499087a3161ebf425803d6ac7e66513b7c73c759c730b8a17858f4e1b82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.962676 5112 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.970450 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"472d4dbe-4674-43ba-98da-98502eccb960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv8p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv8p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-b7fmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.978482 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4hrlr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c78b0cc-34ce-48fe-aeb3-f84b04fc6af6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://fed422b7e0cecc71b17be69bf8ebd893f3583f2d3bd691103e41544d9924e6df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4hrlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.987247 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.991529 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.991583 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.991592 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.991605 5112 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeNotReady" Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.991615 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:08Z","lastTransitionTime":"2025-12-08T17:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:08 crc kubenswrapper[5112]: I1208 17:42:08.995481 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4hrlr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c78b0cc-34ce-48fe-aeb3-f84b04fc6af6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://fed422b7e0cecc71b17be69bf8ebd893f3583f2d3bd691103e41544d9924e6df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886e
c445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4hrlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.003259 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.009197 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-rsc28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a63a9eaf-972d-4a8f-a9e5-f0f397bf8e9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pm48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rsc28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.026634 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0510de3f-316a-4902-a746-a746c3ce594c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ng27z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.036734 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a54de98a-e0fb-42e6-9458-35bf008a1af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8e1ad1521591e581cd357d3b49dde54e9a2c1a793edc8dced64f3acbe9f7f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50
Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://69cc882495a4c55c83d8793d16e873cde0e5c81bbf76ed52eec3ed59b99b937f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5e0e157a3ba41263bd7a39a6c64f50ccf232bc55ef3df90ffbbd314418ce69bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0018c38fa9031e6a1ae5e3b128294b6e93ace41c3e09982a94e2af830b208186\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0018c38fa9031e6a1ae5e3b128294b6e93ace41c3e09982a94e2af830b208186\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"19
2.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.048701 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d35301b2-73ca-44c7-bb4c-e7e68d41ac54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://635a0eb4e03cd1986945a33c0535e16c4f6d5cd1fbcbdb9e1ecf08f5ef7f71a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://26119bf950efdaf6bdf29fdd288dcde55f46e233074dadd05d09ae9662a98ac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62545f7eda9644c677205810fa88d197c6edaf171a4f7972db714c29d0e0c0f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7a8bdf5c7a
f29378b589fe72e7884a1b14b3ee02680ca8842031e246f786fa59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a8bdf5c7af29378b589fe72e7884a1b14b3ee02680ca8842031e246f786fa59\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T17:41:31Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW1208 17:41:31.167389 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 17:41:31.167693 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 17:41:31.168628 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3936714883/tls.crt::/tmp/serving-cert-3936714883/tls.key\\\\\\\"\\\\nI1208 17:41:31.681853 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 17:41:31.683635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 17:41:31.683651 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 17:41:31.683675 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 17:41:31.683681 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 17:41:31.690777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 17:41:31.690804 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 17:41:31.690811 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 17:41:31.690838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 17:41:31.690843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 17:41:31.690848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 17:41:31.690851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 17:41:31.690855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 17:41:31.693539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T17:41:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://43d313abdeed2aafe14cb80ae38e97b5bbdc62197de02664fbc6c17597822faa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3fbd9e0043ef37c10fbc9823215d21c9b1eee4025f73932cd6cbf3055d5f2397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fbd9e0043ef37c10fbc9823215d21c9b1eee4025f73932cd6cbf3055d5f2397\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.058618 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.070168 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-kvv4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"288ee203-be3f-4176-90b2-7d95ee47aee8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbcf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kvv4v\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.083264 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jq8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c4fb553-8514-4194-847c-96d40f8b41e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mv6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mv6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jq8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.091369 5112 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95e46da0-94bb-4d22-804b-b3018984cdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7e81eb6df709930174252fbfee132c752fdf972b294a437e8d67f812283e0aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\
":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56lk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://06e99bae4932494f4de98999926cd28dc808f1a2982c7e8e2372927bc72d1153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56lk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s6wzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 
17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.096306 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.096348 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.096358 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.096373 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.096383 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:09Z","lastTransitionTime":"2025-12-08T17:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.118538 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ad0b160-7036-4cfb-9738-1e0e8ebe1e5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://953eaf00aeddf0f031eb9db85dda27332777dd31ac6746dfdedcc13ed20cb02c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://a7b9e7098ad13452cf8f0aa13c84480bf630b57c0296cec645e8fd4f030b13fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7ace3eb0fbb6c37ad43df89af7c25f6a0bda9c7e079a6bfb7683984630e7cd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a
6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c9534bda3d71b68f6920f0c8a5dd54d3d31bac188d8fb76a1d29a3f5f0b621a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://16bdf27bbd7b756aec823f0df94a6a72c5ad978e71a5e24824de2ab45e54c0c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://60a4953961d430db10a6fe21df995549ba6cbe84be5c4e7bfc3788c53c152bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60a4953961d430db10a6fe21df995549ba6cbe84be5c4e7bfc3788c53c152bd6\\\",\\\"exitCode\\\":0
,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://276a86f68c5958c297d0c493b713318ea5ebe302666d6478a229eae2ffed90b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://276a86f68c5958c297d0c493b713318ea5ebe302666d6478a229eae2ffed90b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8759198baf7f22676f14d8513f3af488df4c58c4ed65db59aac2bff888a6d878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\
",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8759198baf7f22676f14d8513f3af488df4c58c4ed65db59aac2bff888a6d878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.128854 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a7a500b-9152-4fa4-a5ef-7a037610043a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ea6605166b2660aac60c892c3aa4300f70f3c325fa54b0c5cebab4c59e7e44d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1f8f801ff9a8f57c7935a0dc59f8db7ead28e39b9b0235c5fbac0238e1ae4d1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://407e6dc04957ad635291d63043e12fc7751c6de36462219e6f8e991af59b523c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f47b69e17f8b8b7e2c46f449515d3eb8408a6ef649bf396eef3abeac2d4b2483\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.140230 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://421c1c49a7ff024abb1ba074633a8a889a7d0a55b125aa8c2cc96de65f4585d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.150076 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.162068 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.172902 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9xjh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"575dcc54-1cfa-45ab-8c22-087fcf27f142\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5254317985ea3f0c6e29ebbbc0afa8a7fcb3a10c22298efe7fa25998a259a60f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5254317985ea3f0c6e29ebbbc0afa8a7fcb3a10c22298efe7fa25998a259a60f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:42:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\
"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9xjh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.185066 5112 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"367e7840-8095-41c1-93ec-9c02ff4d243d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://63bd2b5515bea7e14b54005f1477f959aac15ff6b2771db37fc28e46eea6be70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://86e97499087a3161ebf425803d6ac7e66513b7c73c759c730b8a17858f4e1b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e97499087a3161ebf425803d6ac7e66513b7c73c759c730b8a17858f4e1b82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.194058 5112 status_manager.go:919] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.197828 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.197874 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.197886 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.197901 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.197912 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:09Z","lastTransitionTime":"2025-12-08T17:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.202755 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"472d4dbe-4674-43ba-98da-98502eccb960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been 
read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv8p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv8p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-b7fmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.300114 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.300152 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.300160 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.300172 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.300182 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:09Z","lastTransitionTime":"2025-12-08T17:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.320657 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jq8h" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.320856 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:42:09 crc kubenswrapper[5112]: E1208 17:42:09.320967 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.321809 5112 scope.go:117] "RemoveContainer" containerID="7a8bdf5c7af29378b589fe72e7884a1b14b3ee02680ca8842031e246f786fa59" Dec 08 17:42:09 crc kubenswrapper[5112]: E1208 17:42:09.321957 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 17:42:09 crc kubenswrapper[5112]: E1208 17:42:09.322219 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jq8h" podUID="3c4fb553-8514-4194-847c-96d40f8b41e3" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.322303 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.322427 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:42:09 crc kubenswrapper[5112]: E1208 17:42:09.322583 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 17:42:09 crc kubenswrapper[5112]: E1208 17:42:09.322641 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.402803 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.402848 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.402859 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.402876 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.402888 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:09Z","lastTransitionTime":"2025-12-08T17:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.505400 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.505483 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.505503 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.505525 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.505574 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:09Z","lastTransitionTime":"2025-12-08T17:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.607265 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.607300 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.607309 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.607324 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.607334 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:09Z","lastTransitionTime":"2025-12-08T17:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.709503 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.709550 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.709561 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.709578 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.709590 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:09Z","lastTransitionTime":"2025-12-08T17:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.777711 5112 generic.go:358] "Generic (PLEG): container finished" podID="575dcc54-1cfa-45ab-8c22-087fcf27f142" containerID="0b951c8dc879fd4cce80d0ae2a1e38d4c02ef502e12b0ffda7cf719adcde31ba" exitCode=0 Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.777772 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9xjh5" event={"ID":"575dcc54-1cfa-45ab-8c22-087fcf27f142","Type":"ContainerDied","Data":"0b951c8dc879fd4cce80d0ae2a1e38d4c02ef502e12b0ffda7cf719adcde31ba"} Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.779975 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-rsc28" event={"ID":"a63a9eaf-972d-4a8f-a9e5-f0f397bf8e9b","Type":"ContainerStarted","Data":"c0f319976ebd2aaaaee100e79566bb2ac503ed7ff46cc7526e370c7d9690a87d"} Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.791970 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.802943 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-rsc28" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a63a9eaf-972d-4a8f-a9e5-f0f397bf8e9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pm48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rsc28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.812552 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.812598 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.812630 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.812648 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.812660 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:09Z","lastTransitionTime":"2025-12-08T17:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.827763 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0510de3f-316a-4902-a746-a746c3ce594c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ng27z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.838598 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a54de98a-e0fb-42e6-9458-35bf008a1af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8e1ad1521591e581cd357d3b49dde54e9a2c1a793edc8dced64f3acbe9f7f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50
Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://69cc882495a4c55c83d8793d16e873cde0e5c81bbf76ed52eec3ed59b99b937f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5e0e157a3ba41263bd7a39a6c64f50ccf232bc55ef3df90ffbbd314418ce69bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0018c38fa9031e6a1ae5e3b128294b6e93ace41c3e09982a94e2af830b208186\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0018c38fa9031e6a1ae5e3b128294b6e93ace41c3e09982a94e2af830b208186\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"19
2.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.853534 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d35301b2-73ca-44c7-bb4c-e7e68d41ac54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://635a0eb4e03cd1986945a33c0535e16c4f6d5cd1fbcbdb9e1ecf08f5ef7f71a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://26119bf950efdaf6bdf29fdd288dcde55f46e233074dadd05d09ae9662a98ac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62545f7eda9644c677205810fa88d197c6edaf171a4f7972db714c29d0e0c0f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7a8bdf5c7a
f29378b589fe72e7884a1b14b3ee02680ca8842031e246f786fa59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a8bdf5c7af29378b589fe72e7884a1b14b3ee02680ca8842031e246f786fa59\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T17:41:31Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW1208 17:41:31.167389 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 17:41:31.167693 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 17:41:31.168628 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3936714883/tls.crt::/tmp/serving-cert-3936714883/tls.key\\\\\\\"\\\\nI1208 17:41:31.681853 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 17:41:31.683635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 17:41:31.683651 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 17:41:31.683675 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 17:41:31.683681 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 17:41:31.690777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 17:41:31.690804 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 17:41:31.690811 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 17:41:31.690838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 17:41:31.690843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 17:41:31.690848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 17:41:31.690851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 17:41:31.690855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 17:41:31.693539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T17:41:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://43d313abdeed2aafe14cb80ae38e97b5bbdc62197de02664fbc6c17597822faa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3fbd9e0043ef37c10fbc9823215d21c9b1eee4025f73932cd6cbf3055d5f2397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fbd9e0043ef37c10fbc9823215d21c9b1eee4025f73932cd6cbf3055d5f2397\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.863845 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.875335 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-kvv4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"288ee203-be3f-4176-90b2-7d95ee47aee8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbcf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kvv4v\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.883495 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jq8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c4fb553-8514-4194-847c-96d40f8b41e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mv6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mv6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jq8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.891397 5112 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95e46da0-94bb-4d22-804b-b3018984cdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7e81eb6df709930174252fbfee132c752fdf972b294a437e8d67f812283e0aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\
":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56lk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://06e99bae4932494f4de98999926cd28dc808f1a2982c7e8e2372927bc72d1153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56lk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s6wzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 
17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.914639 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.914675 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.914683 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.914698 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.914707 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:09Z","lastTransitionTime":"2025-12-08T17:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.917805 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ad0b160-7036-4cfb-9738-1e0e8ebe1e5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://953eaf00aeddf0f031eb9db85dda27332777dd31ac6746dfdedcc13ed20cb02c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://a7b9e7098ad13452cf8f0aa13c84480bf630b57c0296cec645e8fd4f030b13fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7ace3eb0fbb6c37ad43df89af7c25f6a0bda9c7e079a6bfb7683984630e7cd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a
6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c9534bda3d71b68f6920f0c8a5dd54d3d31bac188d8fb76a1d29a3f5f0b621a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://16bdf27bbd7b756aec823f0df94a6a72c5ad978e71a5e24824de2ab45e54c0c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://60a4953961d430db10a6fe21df995549ba6cbe84be5c4e7bfc3788c53c152bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60a4953961d430db10a6fe21df995549ba6cbe84be5c4e7bfc3788c53c152bd6\\\",\\\"exitCode\\\":0
,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://276a86f68c5958c297d0c493b713318ea5ebe302666d6478a229eae2ffed90b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://276a86f68c5958c297d0c493b713318ea5ebe302666d6478a229eae2ffed90b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8759198baf7f22676f14d8513f3af488df4c58c4ed65db59aac2bff888a6d878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\
",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8759198baf7f22676f14d8513f3af488df4c58c4ed65db59aac2bff888a6d878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.933327 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a7a500b-9152-4fa4-a5ef-7a037610043a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ea6605166b2660aac60c892c3aa4300f70f3c325fa54b0c5cebab4c59e7e44d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1f8f801ff9a8f57c7935a0dc59f8db7ead28e39b9b0235c5fbac0238e1ae4d1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://407e6dc04957ad635291d63043e12fc7751c6de36462219e6f8e991af59b523c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f47b69e17f8b8b7e2c46f449515d3eb8408a6ef649bf396eef3abeac2d4b2483\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.943692 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://421c1c49a7ff024abb1ba074633a8a889a7d0a55b125aa8c2cc96de65f4585d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.955411 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.963967 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.976204 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9xjh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"575dcc54-1cfa-45ab-8c22-087fcf27f142\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5254317985ea3f0c6e29ebbbc0afa8a7fcb3a10c22298efe7fa25998a259a60f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCo
unt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5254317985ea3f0c6e29ebbbc0afa8a7fcb3a10c22298efe7fa25998a259a60f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:42:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b951c8dc879fd4cce80d0ae2a1e38d4c02ef502e12b0ffda7cf719adcde31ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b951c8dc879fd4cce80d0ae2a1e38d4c02ef502e12b0ffda7cf719adcde31ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:42:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\
":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9xjh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.984795 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"367e7840-8095-41c1-93ec-9c02ff4d243d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://63bd2b5515bea7e14b54005f1477f959aac15ff6b2771db37fc28e46eea6be70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":tru
e,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://86e97499087a3161ebf425803d6ac7e66513b7c73c759c730b8a17858f4e1b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e97499087a3161ebf425803d6ac7e66513b7c73c759c730b8a17858f4e1b82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\
\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:09 crc kubenswrapper[5112]: I1208 17:42:09.994399 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.002872 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"472d4dbe-4674-43ba-98da-98502eccb960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv8p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv8p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-b7fmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.011720 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4hrlr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c78b0cc-34ce-48fe-aeb3-f84b04fc6af6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://fed422b7e0cecc71b17be69bf8ebd893f3583f2d3bd691103e41544d9924e6df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4hrlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.017236 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.017269 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.017280 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.017293 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.017302 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:10Z","lastTransitionTime":"2025-12-08T17:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.068446 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a54de98a-e0fb-42e6-9458-35bf008a1af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8e1ad1521591e581cd357d3b49dde54e9a2c1a793edc8dced64f3acbe9f7f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://69cc882495a4c55c83d8793d16e873cde0e5c81bbf76ed52eec3ed59b99b937f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5e0e157a3ba41263bd7a39a6c64f50ccf232bc55ef3df90ffbbd314418ce69bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\
\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0018c38fa9031e6a1ae5e3b128294b6e93ace41c3e09982a94e2af830b208186\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0018c38fa9031e6a1ae5e3b128294b6e93ace41c3e09982a94e2af830b208186\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.081529 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d35301b2-73ca-44c7-bb4c-e7e68d41ac54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://635a0eb4e03cd1986945a33c0535e16c4f6d5cd1fbcbdb9e1ecf08f5ef7f71a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://26119bf950efdaf6bdf29fdd288dcde55f46e233074dadd05d09ae9662a98ac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62545f7eda9644c677205810fa88d197c6edaf171a4f7972db714c29d0e0c0f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7a8bdf5c7a
f29378b589fe72e7884a1b14b3ee02680ca8842031e246f786fa59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a8bdf5c7af29378b589fe72e7884a1b14b3ee02680ca8842031e246f786fa59\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T17:41:31Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW1208 17:41:31.167389 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 17:41:31.167693 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 17:41:31.168628 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3936714883/tls.crt::/tmp/serving-cert-3936714883/tls.key\\\\\\\"\\\\nI1208 17:41:31.681853 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 17:41:31.683635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 17:41:31.683651 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 17:41:31.683675 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 17:41:31.683681 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 17:41:31.690777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 17:41:31.690804 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 17:41:31.690811 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 17:41:31.690838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 17:41:31.690843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 17:41:31.690848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 17:41:31.690851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 17:41:31.690855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 17:41:31.693539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T17:41:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://43d313abdeed2aafe14cb80ae38e97b5bbdc62197de02664fbc6c17597822faa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3fbd9e0043ef37c10fbc9823215d21c9b1eee4025f73932cd6cbf3055d5f2397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fbd9e0043ef37c10fbc9823215d21c9b1eee4025f73932cd6cbf3055d5f2397\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.094636 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.109513 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-kvv4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"288ee203-be3f-4176-90b2-7d95ee47aee8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbcf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kvv4v\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.116868 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jq8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c4fb553-8514-4194-847c-96d40f8b41e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mv6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mv6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jq8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.129286 5112 kubelet_node_status.go:736] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.129324 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.129338 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.129354 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.129366 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:10Z","lastTransitionTime":"2025-12-08T17:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.158459 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95e46da0-94bb-4d22-804b-b3018984cdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7e81eb6df709930174252fbfee132c752fdf972b294a437e8d67f812283e0aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:07Z\\\"}},\\\"user\\\":{\\\
"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56lk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://06e99bae4932494f4de98999926cd28dc808f1a2982c7e8e2372927bc72d1153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56lk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s6wzf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.191311 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ad0b160-7036-4cfb-9738-1e0e8ebe1e5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://953eaf00aeddf0f031eb9db85dda27332777dd31ac6746dfdedcc13ed20cb02c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:26Z\\\"}},\\\"use
r\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://a7b9e7098ad13452cf8f0aa13c84480bf630b57c0296cec645e8fd4f030b13fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7ace3eb0fbb6c37ad43df89af7c25f6a0bda9c7e079a6bfb7683984630e7cd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259
fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c9534bda3d71b68f6920f0c8a5dd54d3d31bac188d8fb76a1d29a3f5f0b621a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"contain
erID\\\":\\\"cri-o://16bdf27bbd7b756aec823f0df94a6a72c5ad978e71a5e24824de2ab45e54c0c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://60a4953961d430db10a6fe21df995549ba6cbe84be5c4e7bfc3788c53c152bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://60a4953961d430db10a6fe21df995549ba6cbe84be5c4e7bfc3788c53c152bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://276a86f68c5958c297d0c493b713318ea5ebe302666d6478a229eae2ffed90b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://276a86f68c5958c297d0c493b713318ea5ebe302666d6478a229eae2ffed90b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8759198baf7f22676f14d8513f3af488df4c58c4ed65db59aac2bff888a6d878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8759198baf7f22676f14d8513f3af488df4c58c4ed65db59aac2bff888a6d878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.202781 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a7a500b-9152-4fa4-a5ef-7a037610043a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ea6605166b2660aac60c892c3aa4300f70f3c325fa54b0c5cebab4c59e7e44d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1f8f801ff9a8f57c7935a0dc59f8db7ead28e39b9b0235c5fbac0238e1ae4d1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://407e6dc04957ad635291d63043e12fc7751c6de36462219e6f8e991af59b523c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f47b69e17f8b8b7e2c46f449515d3eb8408a6ef649bf396eef3abeac2d4b2483\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.214980 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://421c1c49a7ff024abb1ba074633a8a889a7d0a55b125aa8c2cc96de65f4585d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.225448 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.231607 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.231651 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.231663 5112 kubelet_node_status.go:736] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.231679 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.231689 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:10Z","lastTransitionTime":"2025-12-08T17:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.235180 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.245655 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9xjh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"575dcc54-1cfa-45ab-8c22-087fcf27f142\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5254317985ea3f0c6e29ebbbc0afa8a7fcb3a10c22298efe7fa25998a259a60f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCo
unt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5254317985ea3f0c6e29ebbbc0afa8a7fcb3a10c22298efe7fa25998a259a60f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:42:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b951c8dc879fd4cce80d0ae2a1e38d4c02ef502e12b0ffda7cf719adcde31ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b951c8dc879fd4cce80d0ae2a1e38d4c02ef502e12b0ffda7cf719adcde31ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:42:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\
":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9xjh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.253629 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"367e7840-8095-41c1-93ec-9c02ff4d243d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://63bd2b5515bea7e14b54005f1477f959aac15ff6b2771db37fc28e46eea6be70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":tru
e,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://86e97499087a3161ebf425803d6ac7e66513b7c73c759c730b8a17858f4e1b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e97499087a3161ebf425803d6ac7e66513b7c73c759c730b8a17858f4e1b82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\
\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.262265 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.270757 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"472d4dbe-4674-43ba-98da-98502eccb960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv8p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv8p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-b7fmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.277999 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4hrlr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c78b0cc-34ce-48fe-aeb3-f84b04fc6af6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://fed422b7e0cecc71b17be69bf8ebd893f3583f2d3bd691103e41544d9924e6df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4hrlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.286398 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.295411 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-rsc28" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a63a9eaf-972d-4a8f-a9e5-f0f397bf8e9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://c0f319976ebd2aaaaee100e79566bb2ac503ed7ff46cc7526e370c7d9690a87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pm48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rsc28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.310062 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0510de3f-316a-4902-a746-a746c3ce594c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ng27z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.335048 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.335111 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.335124 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.335141 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.335152 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:10Z","lastTransitionTime":"2025-12-08T17:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.437858 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.437912 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.437924 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.437942 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.437953 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:10Z","lastTransitionTime":"2025-12-08T17:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.540111 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.540139 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.540147 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.540162 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.540171 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:10Z","lastTransitionTime":"2025-12-08T17:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.642202 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.642240 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.642249 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.642261 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.642275 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:10Z","lastTransitionTime":"2025-12-08T17:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.744361 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.744406 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.744420 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.744436 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.744449 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:10Z","lastTransitionTime":"2025-12-08T17:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.784024 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"0fef78ac2d3fb131513294bdfb02ff95c3266c5a6b5457e6b6eab3a8528ea992"} Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.785381 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kvv4v" event={"ID":"288ee203-be3f-4176-90b2-7d95ee47aee8","Type":"ContainerStarted","Data":"aeb0708a96645938003ab2d6f651e2c6c0996b2252673869e193349197d88b1f"} Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.787004 5112 generic.go:358] "Generic (PLEG): container finished" podID="0510de3f-316a-4902-a746-a746c3ce594c" containerID="ddf7ad90683c0247b1d91e130cf8c3c5cb15d668c9e58d00df0a498a52510072" exitCode=0 Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.787121 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" event={"ID":"0510de3f-316a-4902-a746-a746c3ce594c","Type":"ContainerDied","Data":"ddf7ad90683c0247b1d91e130cf8c3c5cb15d668c9e58d00df0a498a52510072"} Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.789253 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9xjh5" event={"ID":"575dcc54-1cfa-45ab-8c22-087fcf27f142","Type":"ContainerStarted","Data":"8868cf7cf317cd1c08317fc1777273909c4e86faec23dd9b43b394edc8cfd0f9"} Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.791358 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" event={"ID":"472d4dbe-4674-43ba-98da-98502eccb960","Type":"ContainerStarted","Data":"f144781c243b5270f65ed3ad052edfb4bd18a942565a3ad88814dfcfbff114c6"} Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.794432 5112 
status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4hrlr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c78b0cc-34ce-48fe-aeb3-f84b04fc6af6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://fed422b7e0cecc71b17be69bf8ebd893f3583f2d3bd691103e41544d9924e6df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\
\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4hrlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.803488 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.809940 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-rsc28" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a63a9eaf-972d-4a8f-a9e5-f0f397bf8e9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://c0f319976ebd2aaaaee100e79566bb2ac503ed7ff46cc7526e370c7d9690a87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pm48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rsc28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.822293 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0510de3f-316a-4902-a746-a746c3ce594c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ng27z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.830815 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a54de98a-e0fb-42e6-9458-35bf008a1af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8e1ad1521591e581cd35
7d3b49dde54e9a2c1a793edc8dced64f3acbe9f7f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://69cc882495a4c55c83d8793d16e873cde0e5c81bbf76ed52eec3ed59b99b937f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\
\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5e0e157a3ba41263bd7a39a6c64f50ccf232bc55ef3df90ffbbd314418ce69bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0018c38fa9031e6a1ae5e3b128294b6e93ace41c3e09982a94e2af830b208186\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\
\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0018c38fa9031e6a1ae5e3b128294b6e93ace41c3e09982a94e2af830b208186\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.840635 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d35301b2-73ca-44c7-bb4c-e7e68d41ac54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://635a0eb4e03cd1986945a33c0535e16c4f6d5cd1fbcbdb9e1ecf08f5ef7f71a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://26119bf950efdaf6bdf29fdd288dcde55f46e233074dadd05d09ae9662a98ac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62545f7eda9644c677205810fa88d197c6edaf171a4f7972db714c29d0e0c0f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7a8bdf5c7a
f29378b589fe72e7884a1b14b3ee02680ca8842031e246f786fa59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a8bdf5c7af29378b589fe72e7884a1b14b3ee02680ca8842031e246f786fa59\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T17:41:31Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW1208 17:41:31.167389 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 17:41:31.167693 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 17:41:31.168628 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3936714883/tls.crt::/tmp/serving-cert-3936714883/tls.key\\\\\\\"\\\\nI1208 17:41:31.681853 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 17:41:31.683635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 17:41:31.683651 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 17:41:31.683675 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 17:41:31.683681 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 17:41:31.690777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 17:41:31.690804 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 17:41:31.690811 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 17:41:31.690838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 17:41:31.690843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 17:41:31.690848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 17:41:31.690851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 17:41:31.690855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 17:41:31.693539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T17:41:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://43d313abdeed2aafe14cb80ae38e97b5bbdc62197de02664fbc6c17597822faa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3fbd9e0043ef37c10fbc9823215d21c9b1eee4025f73932cd6cbf3055d5f2397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fbd9e0043ef37c10fbc9823215d21c9b1eee4025f73932cd6cbf3055d5f2397\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.846365 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.846404 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.846413 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.846427 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.846438 5112 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:10Z","lastTransitionTime":"2025-12-08T17:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.848488 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.856389 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-kvv4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"288ee203-be3f-4176-90b2-7d95ee47aee8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://aeb0708a96645938003ab2d6f651e2c6c0996b2252673869e193349197d88b1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"moun
tPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbcf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kvv4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:10 crc 
kubenswrapper[5112]: I1208 17:42:10.861884 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jq8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c4fb553-8514-4194-847c-96d40f8b41e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mv6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mv6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jq8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.868443 5112 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95e46da0-94bb-4d22-804b-b3018984cdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7e81eb6df709930174252fbfee132c752fdf972b294a437e8d67f812283e0aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\
":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56lk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://06e99bae4932494f4de98999926cd28dc808f1a2982c7e8e2372927bc72d1153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56lk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s6wzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 
17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.889393 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ad0b160-7036-4cfb-9738-1e0e8ebe1e5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://953eaf00aeddf0f031eb9db85dda27332777dd31ac6746dfdedcc13ed20cb02c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"sta
tic-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://a7b9e7098ad13452cf8f0aa13c84480bf630b57c0296cec645e8fd4f030b13fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7ace3eb0fbb6c37ad43df89af7c25f6a0bda9c7e079a6bfb7683984630e7cd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25
770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c9534bda3d71b68f6920f0c8a5dd54d3d31bac188d8fb76a1d29a3f5f0b621a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://16bdf27bbd7b756aec823f0df94a6a72c5ad978e71a5e24824de2ab45e54c0c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b62
67447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://60a4953961d430db10a6fe21df995549ba6cbe84be5c4e7bfc3788c53c152bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60a4953961d430db10a6fe21df995549ba6cbe84be5c4e7bfc3788c53c152bd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://276a86f68c5958c297d0c493b713318ea5ebe302666d6478a229eae2ffed90b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://276a86f68c5958c297d0c493b713318ea5ebe302666d6478a229eae2ffed90b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8759198baf7f22676f14d8513f3af488df4c58c4ed65db59aac2bff888a6d878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"
requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8759198baf7f22676f14d8513f3af488df4c58c4ed65db59aac2bff888a6d878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.901479 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a7a500b-9152-4fa4-a5ef-7a037610043a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ea6605166b2660aac60c892c3aa4300f70f3c325fa54b0c5cebab4c59e7e44d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1f8f801ff9a8f57c7935a0dc59f8db7ead28e39b9b0235c5fbac0238e1ae4d1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://407e6dc04957ad635291d63043e12fc7751c6de36462219e6f8e991af59b523c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f47b69e17f8b8b7e2c46f449515d3eb8408a6ef649bf396eef3abeac2d4b2483\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.913589 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://421c1c49a7ff024abb1ba074633a8a889a7d0a55b125aa8c2cc96de65f4585d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.923585 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.935146 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.946285 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9xjh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"575dcc54-1cfa-45ab-8c22-087fcf27f142\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5254317985ea3f0c6e29ebbbc0afa8a7fcb3a10c22298efe7fa25998a259a60f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCo
unt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5254317985ea3f0c6e29ebbbc0afa8a7fcb3a10c22298efe7fa25998a259a60f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:42:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b951c8dc879fd4cce80d0ae2a1e38d4c02ef502e12b0ffda7cf719adcde31ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b951c8dc879fd4cce80d0ae2a1e38d4c02ef502e12b0ffda7cf719adcde31ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:42:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\
":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9xjh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.953338 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.953416 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.953431 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.953451 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.953464 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:10Z","lastTransitionTime":"2025-12-08T17:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.955491 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"367e7840-8095-41c1-93ec-9c02ff4d243d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://63bd2b5515bea7e14b54005f1477f959aac15ff6b2771db37fc28e46eea6be70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":
65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://86e97499087a3161ebf425803d6ac7e66513b7c73c759c730b8a17858f4e1b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e97499087a3161ebf425803d6ac7e66513b7c73c759c730b8a17858f4e1b82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.970711 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.981177 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"472d4dbe-4674-43ba-98da-98502eccb960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv8p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv8p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-b7fmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:10 crc kubenswrapper[5112]: I1208 17:42:10.993959 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a54de98a-e0fb-42e6-9458-35bf008a1af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8e1ad1521591e581cd357d3b49dde54e9a2c1a793edc8dced64f3acbe9f7f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://69cc882495a4c55c83d8793d16e873cde0e5c81bbf76ed52eec3ed59b99b937f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5e0e157a3ba41263bd7a39a6c64f50ccf232bc55ef3df90ffbbd314418ce69bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0018c38fa9031e6a1ae5e3b128294b6e93ace41c3e09982a94e2af830b208186\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0018c38fa9031e6a1ae5e3b128294b6e93ace41c3e09982a94e2af830b208186\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.003973 5112 
status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d35301b2-73ca-44c7-bb4c-e7e68d41ac54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://635a0eb4e03cd1986945a33c0535e16c4f6d5cd1fbcbdb9e1ecf08f5ef7f71a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://26119bf950efdaf6bdf29fdd288dcde55f46e233074dadd05d09ae9662a98ac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62545f7eda9644c677205810fa88d197c6edaf171a4f7972db714c29d0e0c0f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7
e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7a8bdf5c7af29378b589fe72e7884a1b14b3ee02680ca8842031e246f786fa59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a8bdf5c7af29378b589fe72e7884a1b14b3ee02680ca8842031e246f786fa59\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T17:41:31Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW1208 17:41:31.167389 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 17:41:31.167693 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 17:41:31.168628 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3936714883/tls.crt::/tmp/serving-cert-3936714883/tls.key\\\\\\\"\\\\nI1208 17:41:31.681853 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 17:41:31.683635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 17:41:31.683651 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 17:41:31.683675 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 17:41:31.683681 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 17:41:31.690777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 17:41:31.690804 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 17:41:31.690811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 17:41:31.690838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 17:41:31.690843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 17:41:31.690848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 17:41:31.690851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 17:41:31.690855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 17:41:31.693539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T17:41:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://43d313abdeed2aafe14cb80ae38e97b5bbdc62197de02664fbc6c17597822faa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\
":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3fbd9e0043ef37c10fbc9823215d21c9b1eee4025f73932cd6cbf3055d5f2397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fbd9e0043ef37c10fbc9823215d21c9b1eee4025f73932cd6cbf3055d5f2397\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.013640 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.025797 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-kvv4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"288ee203-be3f-4176-90b2-7d95ee47aee8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://aeb0708a96645938003ab2d6f651e2c6c0996b2252673869e193349197d88b1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"moun
tPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbcf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kvv4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:11 crc 
kubenswrapper[5112]: I1208 17:42:11.039353 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jq8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c4fb553-8514-4194-847c-96d40f8b41e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mv6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mv6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jq8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.048201 5112 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95e46da0-94bb-4d22-804b-b3018984cdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7e81eb6df709930174252fbfee132c752fdf972b294a437e8d67f812283e0aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\
":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56lk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://06e99bae4932494f4de98999926cd28dc808f1a2982c7e8e2372927bc72d1153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56lk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s6wzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 
17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.055506 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.055546 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.055555 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.055568 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.055578 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:11Z","lastTransitionTime":"2025-12-08T17:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.063954 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ad0b160-7036-4cfb-9738-1e0e8ebe1e5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://953eaf00aeddf0f031eb9db85dda27332777dd31ac6746dfdedcc13ed20cb02c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://a7b9e7098ad13452cf8f0aa13c84480bf630b57c0296cec645e8fd4f030b13fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7ace3eb0fbb6c37ad43df89af7c25f6a0bda9c7e079a6bfb7683984630e7cd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a
6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c9534bda3d71b68f6920f0c8a5dd54d3d31bac188d8fb76a1d29a3f5f0b621a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://16bdf27bbd7b756aec823f0df94a6a72c5ad978e71a5e24824de2ab45e54c0c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://60a4953961d430db10a6fe21df995549ba6cbe84be5c4e7bfc3788c53c152bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60a4953961d430db10a6fe21df995549ba6cbe84be5c4e7bfc3788c53c152bd6\\\",\\\"exitCode\\\":0
,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://276a86f68c5958c297d0c493b713318ea5ebe302666d6478a229eae2ffed90b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://276a86f68c5958c297d0c493b713318ea5ebe302666d6478a229eae2ffed90b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8759198baf7f22676f14d8513f3af488df4c58c4ed65db59aac2bff888a6d878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\
",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8759198baf7f22676f14d8513f3af488df4c58c4ed65db59aac2bff888a6d878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.079041 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a7a500b-9152-4fa4-a5ef-7a037610043a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ea6605166b2660aac60c892c3aa4300f70f3c325fa54b0c5cebab4c59e7e44d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1f8f801ff9a8f57c7935a0dc59f8db7ead28e39b9b0235c5fbac0238e1ae4d1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://407e6dc04957ad635291d63043e12fc7751c6de36462219e6f8e991af59b523c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f47b69e17f8b8b7e2c46f449515d3eb8408a6ef649bf396eef3abeac2d4b2483\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.091197 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://421c1c49a7ff024abb1ba074633a8a889a7d0a55b125aa8c2cc96de65f4585d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.101273 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.110963 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.121529 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9xjh5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"575dcc54-1cfa-45ab-8c22-087fcf27f142\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5254317985ea3f0c6e29ebbbc0afa8a7fcb3a10c22298efe7fa25998a259a60f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCo
unt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5254317985ea3f0c6e29ebbbc0afa8a7fcb3a10c22298efe7fa25998a259a60f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:42:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b951c8dc879fd4cce80d0ae2a1e38d4c02ef502e12b0ffda7cf719adcde31ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b951c8dc879fd4cce80d0ae2a1e38d4c02ef502e12b0ffda7cf719adcde31ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:42:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\
":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8868cf7cf317cd1c08317fc1777273909c4e86faec23dd9b43b394edc8cfd0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting
\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/where
abouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9xjh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.129436 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"367e7840-8095-41c1-93ec-9c02ff4d243d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://63bd2b5515bea7e14b54005f1477f959aac15ff6b2771db37fc28e46eea6be70\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://86e97499087a3161ebf425803d6ac7e66513b7c73c759c730b8a17858f4e1b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e97499087a3161ebf425803d6ac7e66513b7c73c759c730b8a17858f4e1b82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.138061 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.157630 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.157677 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.157686 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 
17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.157699 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.157710 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:11Z","lastTransitionTime":"2025-12-08T17:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.174072 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"472d4dbe-4674-43ba-98da-98502eccb960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv8p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv8p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-b7fmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.214947 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4hrlr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c78b0cc-34ce-48fe-aeb3-f84b04fc6af6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://fed422b7e0cecc71b17be69bf8ebd893f3583f2d3bd691103e41544d9924e6df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4hrlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.255981 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.259348 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.259414 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.259426 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.259439 5112 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeNotReady" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.259450 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:11Z","lastTransitionTime":"2025-12-08T17:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.293451 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-rsc28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a63a9eaf-972d-4a8f-a9e5-f0f397bf8e9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://c0f319976ebd2aaaaee100e79566bb2ac503ed7ff46cc7526e370c7d9690a87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f7
5eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pm48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rsc28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.316607 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.316676 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.316683 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.316607 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jq8h" Dec 08 17:42:11 crc kubenswrapper[5112]: E1208 17:42:11.316778 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 17:42:11 crc kubenswrapper[5112]: E1208 17:42:11.316870 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 17:42:11 crc kubenswrapper[5112]: E1208 17:42:11.316922 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 17:42:11 crc kubenswrapper[5112]: E1208 17:42:11.317038 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jq8h" podUID="3c4fb553-8514-4194-847c-96d40f8b41e3" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.342953 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0510de3f-316a-4902-a746-a746c3ce594c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd 
nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},
\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIni
tializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf7ad90683c0247b1d91e130cf8c3c5cb15d668c9e58d00df0a498a52510072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddf7ad90683c0247b1d91e130cf8c3c5cb15d668c9e58d00df0a498a52510072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:42:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],
\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ng27z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.361588 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.361617 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.361629 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.361642 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.361654 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:11Z","lastTransitionTime":"2025-12-08T17:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.463674 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.463714 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.463726 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.463741 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.463751 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:11Z","lastTransitionTime":"2025-12-08T17:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.565804 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.565843 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.565852 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.565865 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.565873 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:11Z","lastTransitionTime":"2025-12-08T17:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.668929 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.668978 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.668989 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.669005 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.669015 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:11Z","lastTransitionTime":"2025-12-08T17:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.771741 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.771819 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.771840 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.771867 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.771886 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:11Z","lastTransitionTime":"2025-12-08T17:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.802402 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"082c488706500b52e3f00aa021216ce9091c5963bd9a268486be1db148ee70b3"} Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.807439 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" event={"ID":"0510de3f-316a-4902-a746-a746c3ce594c","Type":"ContainerStarted","Data":"a254fd68045c83555d54d99c38b26a7cceb4e82c07cf657367dc7e54dcfc5b8f"} Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.810435 5112 generic.go:358] "Generic (PLEG): container finished" podID="575dcc54-1cfa-45ab-8c22-087fcf27f142" containerID="8868cf7cf317cd1c08317fc1777273909c4e86faec23dd9b43b394edc8cfd0f9" exitCode=0 Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.810526 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9xjh5" event={"ID":"575dcc54-1cfa-45ab-8c22-087fcf27f142","Type":"ContainerDied","Data":"8868cf7cf317cd1c08317fc1777273909c4e86faec23dd9b43b394edc8cfd0f9"} Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.814690 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" event={"ID":"472d4dbe-4674-43ba-98da-98502eccb960","Type":"ContainerStarted","Data":"0d4a0df1b413953ea22d933f6d1c17cce51ce61ba86fb54a1b5cef34411d7394"} Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.818325 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"a4897943cf2c226a1ef007512361ba6fe50ac3d03f2ddd631f4d11d849b57bcd"} Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 
17:42:11.818420 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a54de98a-e0fb-42e6-9458-35bf008a1af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8e1ad1521591e581cd357d3b49dde54e9a2c1a793edc8dced64f3acbe9f7f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":
\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://69cc882495a4c55c83d8793d16e873cde0e5c81bbf76ed52eec3ed59b99b937f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5e0e157a3ba41263bd7a39a6c64f50ccf232bc55ef3df90ffbbd314418ce69bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0018c38fa9031e6a1ae5e3b128294b6e93ace41c3e09982a94e2af830b208186\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0018c38fa9031e6a1ae5e3b128294b6e93ace41c3e09982a94e2af830b208186\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.830999 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d35301b2-73ca-44c7-bb4c-e7e68d41ac54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://635a0eb4e03cd1986945a33c0535e16c4f6d5cd1fbcbdb9e1ecf08f5ef7f71a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://26119bf950efdaf6bdf29fdd288dcde55f46e233074dadd05d09ae9662a98ac4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://62545f7eda9644c677205810fa88d197c6edaf171a4f7972db714c29d0e0c0f0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7a8bdf5c7a
f29378b589fe72e7884a1b14b3ee02680ca8842031e246f786fa59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a8bdf5c7af29378b589fe72e7884a1b14b3ee02680ca8842031e246f786fa59\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T17:41:31Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW1208 17:41:31.167389 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 17:41:31.167693 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 17:41:31.168628 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3936714883/tls.crt::/tmp/serving-cert-3936714883/tls.key\\\\\\\"\\\\nI1208 17:41:31.681853 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 17:41:31.683635 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 17:41:31.683651 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 17:41:31.683675 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 17:41:31.683681 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 17:41:31.690777 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 17:41:31.690804 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 17:41:31.690811 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 17:41:31.690838 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 17:41:31.690843 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 17:41:31.690848 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 17:41:31.690851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 17:41:31.690855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 17:41:31.693539 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T17:41:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://43d313abdeed2aafe14cb80ae38e97b5bbdc62197de02664fbc6c17597822faa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3fbd9e0043ef37c10fbc9823215d21c9b1eee4025f73932cd6cbf3055d5f2397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3fbd9e0043ef37c10fbc9823215d21c9b1eee4025f73932cd6cbf3055d5f2397\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.840419 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.856592 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-kvv4v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"288ee203-be3f-4176-90b2-7d95ee47aee8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://aeb0708a96645938003ab2d6f651e2c6c0996b2252673869e193349197d88b1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"moun
tPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gbcf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kvv4v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:11 crc 
kubenswrapper[5112]: I1208 17:42:11.867408 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jq8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c4fb553-8514-4194-847c-96d40f8b41e3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mv6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mv6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jq8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.874469 5112 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95e46da0-94bb-4d22-804b-b3018984cdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7e81eb6df709930174252fbfee132c752fdf972b294a437e8d67f812283e0aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\
":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56lk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://06e99bae4932494f4de98999926cd28dc808f1a2982c7e8e2372927bc72d1153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56lk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s6wzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 
17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.874838 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.874960 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.875069 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.875167 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.875228 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:11Z","lastTransitionTime":"2025-12-08T17:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.891348 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ad0b160-7036-4cfb-9738-1e0e8ebe1e5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://953eaf00aeddf0f031eb9db85dda27332777dd31ac6746dfdedcc13ed20cb02c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://a7b9e7098ad13452cf8f0aa13c84480bf630b57c0296cec645e8fd4f030b13fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7ace3eb0fbb6c37ad43df89af7c25f6a0bda9c7e079a6bfb7683984630e7cd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a
6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c9534bda3d71b68f6920f0c8a5dd54d3d31bac188d8fb76a1d29a3f5f0b621a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://16bdf27bbd7b756aec823f0df94a6a72c5ad978e71a5e24824de2ab45e54c0c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://60a4953961d430db10a6fe21df995549ba6cbe84be5c4e7bfc3788c53c152bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60a4953961d430db10a6fe21df995549ba6cbe84be5c4e7bfc3788c53c152bd6\\\",\\\"exitCode\\\":0
,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://276a86f68c5958c297d0c493b713318ea5ebe302666d6478a229eae2ffed90b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://276a86f68c5958c297d0c493b713318ea5ebe302666d6478a229eae2ffed90b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8759198baf7f22676f14d8513f3af488df4c58c4ed65db59aac2bff888a6d878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\
",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8759198baf7f22676f14d8513f3af488df4c58c4ed65db59aac2bff888a6d878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.901914 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a7a500b-9152-4fa4-a5ef-7a037610043a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ea6605166b2660aac60c892c3aa4300f70f3c325fa54b0c5cebab4c59e7e44d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1f8f801ff9a8f57c7935a0dc59f8db7ead28e39b9b0235c5fbac0238e1ae4d1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://407e6dc04957ad635291d63043e12fc7751c6de36462219e6f8e991af59b523c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f47b69e17f8b8b7e2c46f449515d3eb8408a6ef649bf396eef3abeac2d4b2483\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.910719 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://421c1c49a7ff024abb1ba074633a8a889a7d0a55b125aa8c2cc96de65f4585d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.920292 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.929220 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://082c488706500b52e3f00aa021216ce9091c5963bd9a268486be1db148ee70b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0,1000500000],\\\"uid\\\":1000500000}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\
"},\\\"containerID\\\":\\\"cri-o://0fef78ac2d3fb131513294bdfb02ff95c3266c5a6b5457e6b6eab3a8528ea992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0,1000500000],\\\"uid\\\":1000500000}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.940187 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9xjh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"575dcc54-1cfa-45ab-8c22-087fcf27f142\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5254317985ea3f0c6e29ebbbc0afa8a7fcb3a10c22298efe7fa25998a259a60f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5254317985ea3f0c6e29ebbbc0afa8a7fcb3a10c22298efe7fa25998a259a60f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:42:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"
/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b951c8dc879fd4cce80d0ae2a1e38d4c02ef502e12b0ffda7cf719adcde31ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b951c8dc879fd4cce80d0ae2a1e38d4c02ef502e12b0ffda7cf719adcde31ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:42:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8868cf7cf317cd1c08317fc1777273909c4e86faec23dd9b43b394edc8cfd0f9\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9xjh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.946999 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"367e7840-8095-41c1-93ec-9c02ff4d243d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://63bd2b5515bea7e14b54005f1477f959aac15ff6b2771db37fc28e46eea6be70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"
gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://86e97499087a3161ebf425803d6ac7e66513b7c73c759c730b8a17858f4e1b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e97499087a3161ebf425803d6ac7e66513b7c73c759c730b8a17858f4e1b82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.956404 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.965623 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"472d4dbe-4674-43ba-98da-98502eccb960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv8p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv8p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-b7fmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.973833 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4hrlr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c78b0cc-34ce-48fe-aeb3-f84b04fc6af6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://fed422b7e0cecc71b17be69bf8ebd893f3583f2d3bd691103e41544d9924e6df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4hrlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.976963 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.977114 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.977199 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.977296 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:11 crc kubenswrapper[5112]: I1208 17:42:11.977379 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:11Z","lastTransitionTime":"2025-12-08T17:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.016455 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.055597 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-rsc28" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a63a9eaf-972d-4a8f-a9e5-f0f397bf8e9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://c0f319976ebd2aaaaee100e79566bb2ac503ed7ff46cc7526e370c7d9690a87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pm48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rsc28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.079648 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.079695 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.079708 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.079725 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.079738 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:12Z","lastTransitionTime":"2025-12-08T17:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.102992 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0510de3f-316a-4902-a746-a746c3ce594c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf7ad90683c0247b1d91e130cf8c3c5cb15d668c9e58d00df0a498a52510072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddf7ad90683c0247b1d91e130cf8c3c5cb15d668c9e58d00df0a498a52510072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:42:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\
":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ng27z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.135526 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95e46da0-94bb-4d22-804b-b3018984cdac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\
"cri-o://7e81eb6df709930174252fbfee132c752fdf972b294a437e8d67f812283e0aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56lk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://06e99bae4932494f4de98999926cd28dc808f1a2982c7e8e2372927bc72d1153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56lk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s6wzf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.182184 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.182231 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.182247 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.182263 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.182273 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:12Z","lastTransitionTime":"2025-12-08T17:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.186876 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ad0b160-7036-4cfb-9738-1e0e8ebe1e5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://953eaf00aeddf0f031eb9db85dda27332777dd31ac6746dfdedcc13ed20cb02c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://a7b9e7098ad13452cf8f0aa13c84480bf630b57c0296cec645e8fd4f030b13fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7ace3eb0fbb6c37ad43df89af7c25f6a0bda9c7e079a6bfb7683984630e7cd3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a
6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c9534bda3d71b68f6920f0c8a5dd54d3d31bac188d8fb76a1d29a3f5f0b621a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:27Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://16bdf27bbd7b756aec823f0df94a6a72c5ad978e71a5e24824de2ab45e54c0c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:26Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://60a4953961d430db10a6fe21df995549ba6cbe84be5c4e7bfc3788c53c152bd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60a4953961d430db10a6fe21df995549ba6cbe84be5c4e7bfc3788c53c152bd6\\\",\\\"exitCode\\\":0
,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://276a86f68c5958c297d0c493b713318ea5ebe302666d6478a229eae2ffed90b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://276a86f68c5958c297d0c493b713318ea5ebe302666d6478a229eae2ffed90b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8759198baf7f22676f14d8513f3af488df4c58c4ed65db59aac2bff888a6d878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\
",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8759198baf7f22676f14d8513f3af488df4c58c4ed65db59aac2bff888a6d878\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.216039 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a7a500b-9152-4fa4-a5ef-7a037610043a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ea6605166b2660aac60c892c3aa4300f70f3c325fa54b0c5cebab4c59e7e44d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1f8f801ff9a8f57c7935a0dc59f8db7ead28e39b9b0235c5fbac0238e1ae4d1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://407e6dc04957ad635291d63043e12fc7751c6de36462219e6f8e991af59b523c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f47b69e17f8b8b7e2c46f449515d3eb8408a6ef649bf396eef3abeac2d4b2483\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.260465 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://421c1c49a7ff024abb1ba074633a8a889a7d0a55b125aa8c2cc96de65f4585d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.284826 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.284884 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.284903 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.284928 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.284948 5112 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:12Z","lastTransitionTime":"2025-12-08T17:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.301311 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.337076 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://082c488706500b52e3f00aa021216ce9091c5963bd9a268486be1db148ee70b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0,1000500000],\\\"uid\\\":1000500000}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\
"},\\\"containerID\\\":\\\"cri-o://0fef78ac2d3fb131513294bdfb02ff95c3266c5a6b5457e6b6eab3a8528ea992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0,1000500000],\\\"uid\\\":1000500000}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.378304 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9xjh5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"575dcc54-1cfa-45ab-8c22-087fcf27f142\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5254317985ea3f0c6e29ebbbc0afa8a7fcb3a10c22298efe7fa25998a259a60f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5254317985ea3f0c6e29ebbbc0afa8a7fcb3a10c22298efe7fa25998a259a60f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:42:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"
/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b951c8dc879fd4cce80d0ae2a1e38d4c02ef502e12b0ffda7cf719adcde31ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b951c8dc879fd4cce80d0ae2a1e38d4c02ef502e12b0ffda7cf719adcde31ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:42:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8868cf7cf317cd1c08317fc1777273909c4e86faec23dd9b43b394edc8cfd0f9\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8868cf7cf317cd1c08317fc1777273909c4e86faec23dd9b43b394edc8cfd0f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:42:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c98z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" 
for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9xjh5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.387883 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.387951 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.387969 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.387998 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.388017 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:12Z","lastTransitionTime":"2025-12-08T17:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.415262 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"367e7840-8095-41c1-93ec-9c02ff4d243d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://63bd2b5515bea7e14b54005f1477f959aac15ff6b2771db37fc28e46eea6be70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":
65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://86e97499087a3161ebf425803d6ac7e66513b7c73c759c730b8a17858f4e1b82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86e97499087a3161ebf425803d6ac7e66513b7c73c759c730b8a17858f4e1b82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.457009 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.490143 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.490188 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.490198 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.490210 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.490219 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:12Z","lastTransitionTime":"2025-12-08T17:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.498381 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"472d4dbe-4674-43ba-98da-98502eccb960\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\\\"cri-o://f144781c243b5270f65ed3ad052edfb4bd18a942565a3ad88814dfcfbff114c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv8p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://0d4a0df1b413953ea22d933f6d1c17cce51ce61ba86fb54a1b5cef34411d7394\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sv8p6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-b7fmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.533128 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4hrlr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c78b0cc-34ce-48fe-aeb3-f84b04fc6af6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://fed422b7e0cecc71b17be69bf8ebd893f3583f2d3bd691103e41544d9924e6df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-88g7z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4hrlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.586594 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.593421 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.593456 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.593465 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.593478 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.593487 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:12Z","lastTransitionTime":"2025-12-08T17:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.614737 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-rsc28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a63a9eaf-972d-4a8f-a9e5-f0f397bf8e9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"2
1Mi\\\"},\\\"containerID\\\":\\\"cri-o://c0f319976ebd2aaaaee100e79566bb2ac503ed7ff46cc7526e370c7d9690a87d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:42:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4pm48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rsc28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.661061 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0510de3f-316a-4902-a746-a746c3ce594c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf7ad90683c0247b1d91e130cf8c3c5cb15d668c9e58d00df0a498a52510072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddf7ad90683c0247b1d91e130cf8c3c5cb15d668c9e58d00df0a498a52510072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:42:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\
":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vcrm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ng27z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.695488 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.695531 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.695542 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.695558 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.695567 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:12Z","lastTransitionTime":"2025-12-08T17:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.701152 5112 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a54de98a-e0fb-42e6-9458-35bf008a1af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8e1ad1521591e581cd357d3b49dde54e9a2c1a793edc8dced64f3acbe9f7f2ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://69cc882495a4c55c83d8793d16e873cde0e5c81bbf76ed52eec3ed59b99b937f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5e0e157a3ba41263bd7a39a6c64f50ccf232bc55ef3df90ffbbd314418ce69bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\
\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:25Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0018c38fa9031e6a1ae5e3b128294b6e93ace41c3e09982a94e2af830b208186\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0018c38fa9031e6a1ae5e3b128294b6e93ace41c3e09982a94e2af830b208186\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:24Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.797422 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.797463 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.797473 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.797487 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.797497 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:12Z","lastTransitionTime":"2025-12-08T17:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.827412 5112 generic.go:358] "Generic (PLEG): container finished" podID="575dcc54-1cfa-45ab-8c22-087fcf27f142" containerID="77fa5d1093e4a600c65b9db3f13236c3836e00d6e3a71efdfc13c2f61527d4ba" exitCode=0 Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.827510 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9xjh5" event={"ID":"575dcc54-1cfa-45ab-8c22-087fcf27f142","Type":"ContainerDied","Data":"77fa5d1093e4a600c65b9db3f13236c3836e00d6e3a71efdfc13c2f61527d4ba"} Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.831582 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-kvv4v" podStartSLOduration=90.831564928 podStartE2EDuration="1m30.831564928s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:12.831519026 +0000 UTC m=+109.841067737" watchObservedRunningTime="2025-12-08 17:42:12.831564928 +0000 UTC m=+109.841113619" Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.832169 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" event={"ID":"0510de3f-316a-4902-a746-a746c3ce594c","Type":"ContainerStarted","Data":"ba08fbd099185890d342cda33b8aa8e8ff1f39b4db64cee879f476f3a32e2faa"} Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.832221 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" event={"ID":"0510de3f-316a-4902-a746-a746c3ce594c","Type":"ContainerStarted","Data":"6ea4775c45f66d7a80a761d5a9692c4a1d25e6422138f45db953efe79b67535b"} Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.900425 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 
08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.900504 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.900519 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.900536 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:12 crc kubenswrapper[5112]: I1208 17:42:12.900548 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:12Z","lastTransitionTime":"2025-12-08T17:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.002591 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.002629 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.002640 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.002655 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.002665 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:13Z","lastTransitionTime":"2025-12-08T17:42:13Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.059479 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=32.059454769 podStartE2EDuration="32.059454769s" podCreationTimestamp="2025-12-08 17:41:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:13.017632484 +0000 UTC m=+110.027181185" watchObservedRunningTime="2025-12-08 17:42:13.059454769 +0000 UTC m=+110.069003470" Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.105213 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.105893 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.105997 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.105636 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" podStartSLOduration=91.105609421 podStartE2EDuration="1m31.105609421s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:13.103798062 +0000 UTC m=+110.113346763" watchObservedRunningTime="2025-12-08 17:42:13.105609421 +0000 UTC m=+110.115158122" Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.106117 5112 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.106403 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:13Z","lastTransitionTime":"2025-12-08T17:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.145470 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-4hrlr" podStartSLOduration=91.145451653 podStartE2EDuration="1m31.145451653s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:13.144400915 +0000 UTC m=+110.153949616" watchObservedRunningTime="2025-12-08 17:42:13.145451653 +0000 UTC m=+110.155000374" Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.208561 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.208602 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.208610 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.208625 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.208635 5112 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:13Z","lastTransitionTime":"2025-12-08T17:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.216951 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-rsc28" podStartSLOduration=91.216936396 podStartE2EDuration="1m31.216936396s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:13.215679232 +0000 UTC m=+110.225227933" watchObservedRunningTime="2025-12-08 17:42:13.216936396 +0000 UTC m=+110.226485097" Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.280636 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.280709 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.280740 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.280769 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:42:13 crc kubenswrapper[5112]: E1208 17:42:13.280877 5112 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 17:42:13 crc kubenswrapper[5112]: E1208 17:42:13.280911 5112 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 17:42:13 crc kubenswrapper[5112]: E1208 17:42:13.280916 5112 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 17:42:13 crc kubenswrapper[5112]: E1208 17:42:13.280925 5112 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:42:13 crc kubenswrapper[5112]: E1208 17:42:13.280934 5112 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Dec 08 17:42:13 crc kubenswrapper[5112]: E1208 17:42:13.281110 5112 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 17:42:13 crc kubenswrapper[5112]: E1208 17:42:13.281155 5112 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 17:42:13 crc kubenswrapper[5112]: E1208 17:42:13.281175 5112 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:42:13 crc kubenswrapper[5112]: E1208 17:42:13.281184 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:42:45.281146254 +0000 UTC m=+142.290694955 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 17:42:13 crc kubenswrapper[5112]: E1208 17:42:13.281204 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. 
No retries permitted until 2025-12-08 17:42:45.281198745 +0000 UTC m=+142.290747436 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:42:13 crc kubenswrapper[5112]: E1208 17:42:13.281288 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 17:42:45.281276957 +0000 UTC m=+142.290825648 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:42:13 crc kubenswrapper[5112]: E1208 17:42:13.281303 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:42:45.281298418 +0000 UTC m=+142.290847119 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.311203 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.311243 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.311252 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.311264 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.311273 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:13Z","lastTransitionTime":"2025-12-08T17:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.317918 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 17:42:13 crc kubenswrapper[5112]: E1208 17:42:13.318016 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.318207 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jq8h"
Dec 08 17:42:13 crc kubenswrapper[5112]: E1208 17:42:13.318284 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jq8h" podUID="3c4fb553-8514-4194-847c-96d40f8b41e3"
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.318549 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 17:42:13 crc kubenswrapper[5112]: E1208 17:42:13.318625 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.318692 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 17:42:13 crc kubenswrapper[5112]: E1208 17:42:13.318755 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.340928 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=32.340909932 podStartE2EDuration="32.340909932s" podCreationTimestamp="2025-12-08 17:41:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:13.299278632 +0000 UTC m=+110.308827333" watchObservedRunningTime="2025-12-08 17:42:13.340909932 +0000 UTC m=+110.350458633"
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.381769 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3c4fb553-8514-4194-847c-96d40f8b41e3-metrics-certs\") pod \"network-metrics-daemon-7jq8h\" (UID: \"3c4fb553-8514-4194-847c-96d40f8b41e3\") " pod="openshift-multus/network-metrics-daemon-7jq8h"
Dec 08 17:42:13 crc kubenswrapper[5112]: E1208 17:42:13.381911 5112 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 08 17:42:13 crc kubenswrapper[5112]: E1208 17:42:13.381960 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c4fb553-8514-4194-847c-96d40f8b41e3-metrics-certs podName:3c4fb553-8514-4194-847c-96d40f8b41e3 nodeName:}" failed. No retries permitted until 2025-12-08 17:42:45.381945916 +0000 UTC m=+142.391494617 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3c4fb553-8514-4194-847c-96d40f8b41e3-metrics-certs") pod "network-metrics-daemon-7jq8h" (UID: "3c4fb553-8514-4194-847c-96d40f8b41e3") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.412757 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.412798 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.412810 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.412825 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.412838 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:13Z","lastTransitionTime":"2025-12-08T17:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.428888 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=32.428869019 podStartE2EDuration="32.428869019s" podCreationTimestamp="2025-12-08 17:41:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:13.427711397 +0000 UTC m=+110.437260088" watchObservedRunningTime="2025-12-08 17:42:13.428869019 +0000 UTC m=+110.438417730"
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.429655 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" podStartSLOduration=91.429647729 podStartE2EDuration="1m31.429647729s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:13.376102059 +0000 UTC m=+110.385650770" watchObservedRunningTime="2025-12-08 17:42:13.429647729 +0000 UTC m=+110.439196430"
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.482394 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:13 crc kubenswrapper[5112]: E1208 17:42:13.482634 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:45.482616455 +0000 UTC m=+142.492165146 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.500019 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=32.500003822 podStartE2EDuration="32.500003822s" podCreationTimestamp="2025-12-08 17:41:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:13.460430978 +0000 UTC m=+110.469979689" watchObservedRunningTime="2025-12-08 17:42:13.500003822 +0000 UTC m=+110.509552523"
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.515336 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.515375 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.515384 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.515398 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.515406 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:13Z","lastTransitionTime":"2025-12-08T17:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.618119 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.618708 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.618797 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.618893 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.618995 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:13Z","lastTransitionTime":"2025-12-08T17:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.721640 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.721689 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.721702 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.721719 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.721733 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:13Z","lastTransitionTime":"2025-12-08T17:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.824427 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.824477 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.824490 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.824505 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.824517 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:13Z","lastTransitionTime":"2025-12-08T17:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.838277 5112 generic.go:358] "Generic (PLEG): container finished" podID="575dcc54-1cfa-45ab-8c22-087fcf27f142" containerID="3a897807105895cb4fae63d970ad8e0a6b536656cf0fd7683b1558932f75ee3e" exitCode=0
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.838368 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9xjh5" event={"ID":"575dcc54-1cfa-45ab-8c22-087fcf27f142","Type":"ContainerDied","Data":"3a897807105895cb4fae63d970ad8e0a6b536656cf0fd7683b1558932f75ee3e"}
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.844743 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" event={"ID":"0510de3f-316a-4902-a746-a746c3ce594c","Type":"ContainerStarted","Data":"a8189fc4ecb786878f1710afcf5b0018671faa606a6499c6776ee03ced5fc5a9"}
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.844917 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" event={"ID":"0510de3f-316a-4902-a746-a746c3ce594c","Type":"ContainerStarted","Data":"f93b490d86018f64083ec5026a047c7e8a7954229cd86a40937b6fcdd76f6de3"}
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.845008 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" event={"ID":"0510de3f-316a-4902-a746-a746c3ce594c","Type":"ContainerStarted","Data":"57f10b0d437771a991bc9480367fc4645b551284ff9e81eafbceeeabb13d6455"}
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.926484 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.926531 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.926542 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.926555 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.926565 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:13Z","lastTransitionTime":"2025-12-08T17:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.947045 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.947102 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.947112 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.947127 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.947139 5112 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:13Z","lastTransitionTime":"2025-12-08T17:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:13 crc kubenswrapper[5112]: I1208 17:42:13.992407 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qhm92"]
Dec 08 17:42:14 crc kubenswrapper[5112]: I1208 17:42:14.062436 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qhm92"
Dec 08 17:42:14 crc kubenswrapper[5112]: I1208 17:42:14.066809 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\""
Dec 08 17:42:14 crc kubenswrapper[5112]: I1208 17:42:14.066865 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\""
Dec 08 17:42:14 crc kubenswrapper[5112]: I1208 17:42:14.066824 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\""
Dec 08 17:42:14 crc kubenswrapper[5112]: I1208 17:42:14.067145 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\""
Dec 08 17:42:14 crc kubenswrapper[5112]: I1208 17:42:14.227153 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89a393f9-32a2-4ce9-9e4a-c7ffd762f3a8-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-qhm92\" (UID: \"89a393f9-32a2-4ce9-9e4a-c7ffd762f3a8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qhm92"
Dec 08 17:42:14 crc kubenswrapper[5112]: I1208 17:42:14.227218 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/89a393f9-32a2-4ce9-9e4a-c7ffd762f3a8-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-qhm92\" (UID: \"89a393f9-32a2-4ce9-9e4a-c7ffd762f3a8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qhm92"
Dec 08 17:42:14 crc kubenswrapper[5112]: I1208 17:42:14.227249 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/89a393f9-32a2-4ce9-9e4a-c7ffd762f3a8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-qhm92\" (UID: \"89a393f9-32a2-4ce9-9e4a-c7ffd762f3a8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qhm92"
Dec 08 17:42:14 crc kubenswrapper[5112]: I1208 17:42:14.227265 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89a393f9-32a2-4ce9-9e4a-c7ffd762f3a8-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-qhm92\" (UID: \"89a393f9-32a2-4ce9-9e4a-c7ffd762f3a8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qhm92"
Dec 08 17:42:14 crc kubenswrapper[5112]: I1208 17:42:14.227299 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/89a393f9-32a2-4ce9-9e4a-c7ffd762f3a8-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-qhm92\" (UID: \"89a393f9-32a2-4ce9-9e4a-c7ffd762f3a8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qhm92"
Dec 08 17:42:14 crc kubenswrapper[5112]: I1208 17:42:14.328228 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/89a393f9-32a2-4ce9-9e4a-c7ffd762f3a8-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-qhm92\" (UID: \"89a393f9-32a2-4ce9-9e4a-c7ffd762f3a8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qhm92"
Dec 08 17:42:14 crc kubenswrapper[5112]: I1208 17:42:14.328276 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/89a393f9-32a2-4ce9-9e4a-c7ffd762f3a8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-qhm92\" (UID: \"89a393f9-32a2-4ce9-9e4a-c7ffd762f3a8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qhm92"
Dec 08 17:42:14 crc kubenswrapper[5112]: I1208 17:42:14.328292 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89a393f9-32a2-4ce9-9e4a-c7ffd762f3a8-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-qhm92\" (UID: \"89a393f9-32a2-4ce9-9e4a-c7ffd762f3a8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qhm92"
Dec 08 17:42:14 crc kubenswrapper[5112]: I1208 17:42:14.328379 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/89a393f9-32a2-4ce9-9e4a-c7ffd762f3a8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-qhm92\" (UID: \"89a393f9-32a2-4ce9-9e4a-c7ffd762f3a8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qhm92"
Dec 08 17:42:14 crc kubenswrapper[5112]: I1208 17:42:14.328448 5112 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving"
Dec 08 17:42:14 crc kubenswrapper[5112]: I1208 17:42:14.328480 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/89a393f9-32a2-4ce9-9e4a-c7ffd762f3a8-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-qhm92\" (UID: \"89a393f9-32a2-4ce9-9e4a-c7ffd762f3a8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qhm92"
Dec 08 17:42:14 crc kubenswrapper[5112]: I1208 17:42:14.328459 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/89a393f9-32a2-4ce9-9e4a-c7ffd762f3a8-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-qhm92\" (UID: \"89a393f9-32a2-4ce9-9e4a-c7ffd762f3a8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qhm92"
Dec 08 17:42:14 crc kubenswrapper[5112]: I1208 17:42:14.328583 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89a393f9-32a2-4ce9-9e4a-c7ffd762f3a8-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-qhm92\" (UID: \"89a393f9-32a2-4ce9-9e4a-c7ffd762f3a8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qhm92"
Dec 08 17:42:14 crc kubenswrapper[5112]: I1208 17:42:14.329564 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/89a393f9-32a2-4ce9-9e4a-c7ffd762f3a8-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-qhm92\" (UID: \"89a393f9-32a2-4ce9-9e4a-c7ffd762f3a8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qhm92"
Dec 08 17:42:14 crc kubenswrapper[5112]: I1208 17:42:14.339285 5112 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Dec 08 17:42:14 crc kubenswrapper[5112]: I1208 17:42:14.343035 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89a393f9-32a2-4ce9-9e4a-c7ffd762f3a8-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-qhm92\" (UID: \"89a393f9-32a2-4ce9-9e4a-c7ffd762f3a8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qhm92"
Dec 08 17:42:14 crc kubenswrapper[5112]: I1208 17:42:14.349834 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89a393f9-32a2-4ce9-9e4a-c7ffd762f3a8-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-qhm92\" (UID: \"89a393f9-32a2-4ce9-9e4a-c7ffd762f3a8\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qhm92"
Dec 08 17:42:14 crc kubenswrapper[5112]: I1208 17:42:14.436956 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qhm92"
Dec 08 17:42:14 crc kubenswrapper[5112]: W1208 17:42:14.450721 5112 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89a393f9_32a2_4ce9_9e4a_c7ffd762f3a8.slice/crio-c8f7b5bb68b8508f1c848d13c05df7a2b9c250c7a249ad6f9b302cb3e8c3f62e WatchSource:0}: Error finding container c8f7b5bb68b8508f1c848d13c05df7a2b9c250c7a249ad6f9b302cb3e8c3f62e: Status 404 returned error can't find the container with id c8f7b5bb68b8508f1c848d13c05df7a2b9c250c7a249ad6f9b302cb3e8c3f62e
Dec 08 17:42:14 crc kubenswrapper[5112]: I1208 17:42:14.848205 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qhm92" event={"ID":"89a393f9-32a2-4ce9-9e4a-c7ffd762f3a8","Type":"ContainerStarted","Data":"5dfa693e99cc90872fc5f4e53a9993c44a784c30a92b810607d6c78cf4ade81c"}
Dec 08 17:42:14 crc kubenswrapper[5112]: I1208 17:42:14.849354 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qhm92" event={"ID":"89a393f9-32a2-4ce9-9e4a-c7ffd762f3a8","Type":"ContainerStarted","Data":"c8f7b5bb68b8508f1c848d13c05df7a2b9c250c7a249ad6f9b302cb3e8c3f62e"}
Dec 08 17:42:14 crc kubenswrapper[5112]: I1208 17:42:14.851454 5112 generic.go:358] "Generic (PLEG): container finished" podID="575dcc54-1cfa-45ab-8c22-087fcf27f142" containerID="106929cfbf0423e0f9c3b446d2b55ec1d0244d911b61cda36017b3e48f51f3e9" exitCode=0
Dec 08 17:42:14 crc kubenswrapper[5112]: I1208 17:42:14.851498 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9xjh5" event={"ID":"575dcc54-1cfa-45ab-8c22-087fcf27f142","Type":"ContainerDied","Data":"106929cfbf0423e0f9c3b446d2b55ec1d0244d911b61cda36017b3e48f51f3e9"}
Dec 08 17:42:14 crc kubenswrapper[5112]: I1208 17:42:14.893985 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-qhm92" podStartSLOduration=92.893963309 podStartE2EDuration="1m32.893963309s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:14.863198971 +0000 UTC m=+111.872747672" watchObservedRunningTime="2025-12-08 17:42:14.893963309 +0000 UTC m=+111.903512010"
Dec 08 17:42:15 crc kubenswrapper[5112]: I1208 17:42:15.319847 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 17:42:15 crc kubenswrapper[5112]: E1208 17:42:15.319942 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 08 17:42:15 crc kubenswrapper[5112]: I1208 17:42:15.319951 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jq8h"
Dec 08 17:42:15 crc kubenswrapper[5112]: I1208 17:42:15.320003 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 17:42:15 crc kubenswrapper[5112]: E1208 17:42:15.320151 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 08 17:42:15 crc kubenswrapper[5112]: E1208 17:42:15.320256 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jq8h" podUID="3c4fb553-8514-4194-847c-96d40f8b41e3"
Dec 08 17:42:15 crc kubenswrapper[5112]: I1208 17:42:15.320801 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 17:42:15 crc kubenswrapper[5112]: E1208 17:42:15.321149 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 08 17:42:15 crc kubenswrapper[5112]: I1208 17:42:15.858139 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" event={"ID":"0510de3f-316a-4902-a746-a746c3ce594c","Type":"ContainerStarted","Data":"410005bc0af8b3ea126b76aeff05cf047c47f2204015d69f9d73d763e3cb4783"}
Dec 08 17:42:15 crc kubenswrapper[5112]: I1208 17:42:15.861959 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9xjh5" event={"ID":"575dcc54-1cfa-45ab-8c22-087fcf27f142","Type":"ContainerStarted","Data":"aeca080c1b76ef36e09e85708c680c8322750869480860a7385ab64735e7c23f"}
Dec 08 17:42:15 crc kubenswrapper[5112]: I1208 17:42:15.883670 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-9xjh5" podStartSLOduration=93.883651377 podStartE2EDuration="1m33.883651377s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:15.883582296 +0000 UTC m=+112.893130997" watchObservedRunningTime="2025-12-08 17:42:15.883651377 +0000 UTC m=+112.893200078"
Dec 08 17:42:17 crc kubenswrapper[5112]: I1208 17:42:17.316586 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 17:42:17 crc kubenswrapper[5112]: I1208 17:42:17.316617 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 17:42:17 crc kubenswrapper[5112]: I1208 17:42:17.316630 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jq8h"
Dec 08 17:42:17 crc kubenswrapper[5112]: E1208 17:42:17.316718 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 08 17:42:17 crc kubenswrapper[5112]: E1208 17:42:17.316856 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jq8h" podUID="3c4fb553-8514-4194-847c-96d40f8b41e3"
Dec 08 17:42:17 crc kubenswrapper[5112]: E1208 17:42:17.316919 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 08 17:42:17 crc kubenswrapper[5112]: I1208 17:42:17.316951 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 17:42:17 crc kubenswrapper[5112]: E1208 17:42:17.317074 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 08 17:42:18 crc kubenswrapper[5112]: I1208 17:42:18.875823 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" event={"ID":"0510de3f-316a-4902-a746-a746c3ce594c","Type":"ContainerStarted","Data":"db5365b52d9af27a994e13a4988cb64763378af99640c1716e4a02b2ca0a5d7a"}
Dec 08 17:42:18 crc kubenswrapper[5112]: I1208 17:42:18.876179 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z"
Dec 08 17:42:18 crc kubenswrapper[5112]: I1208 17:42:18.876205 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z"
Dec 08 17:42:18 crc kubenswrapper[5112]: I1208 17:42:18.896748 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z"
Dec 08 17:42:18 crc kubenswrapper[5112]: I1208 17:42:18.909477 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" podStartSLOduration=96.909462979 podStartE2EDuration="1m36.909462979s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:18.908851963 +0000 UTC m=+115.918400694" watchObservedRunningTime="2025-12-08 
17:42:18.909462979 +0000 UTC m=+115.919011680" Dec 08 17:42:19 crc kubenswrapper[5112]: I1208 17:42:19.322556 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:42:19 crc kubenswrapper[5112]: I1208 17:42:19.322559 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:42:19 crc kubenswrapper[5112]: E1208 17:42:19.323159 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 17:42:19 crc kubenswrapper[5112]: I1208 17:42:19.322662 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:42:19 crc kubenswrapper[5112]: E1208 17:42:19.323222 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 17:42:19 crc kubenswrapper[5112]: E1208 17:42:19.323290 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 17:42:19 crc kubenswrapper[5112]: I1208 17:42:19.322651 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jq8h" Dec 08 17:42:19 crc kubenswrapper[5112]: E1208 17:42:19.323414 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jq8h" podUID="3c4fb553-8514-4194-847c-96d40f8b41e3" Dec 08 17:42:19 crc kubenswrapper[5112]: I1208 17:42:19.880111 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:42:19 crc kubenswrapper[5112]: I1208 17:42:19.906701 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:42:21 crc kubenswrapper[5112]: I1208 17:42:21.191614 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-7jq8h"] Dec 08 17:42:21 crc kubenswrapper[5112]: I1208 17:42:21.191778 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jq8h" Dec 08 17:42:21 crc kubenswrapper[5112]: E1208 17:42:21.191883 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7jq8h" podUID="3c4fb553-8514-4194-847c-96d40f8b41e3" Dec 08 17:42:21 crc kubenswrapper[5112]: I1208 17:42:21.316249 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:42:21 crc kubenswrapper[5112]: I1208 17:42:21.316309 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:42:21 crc kubenswrapper[5112]: E1208 17:42:21.316393 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 17:42:21 crc kubenswrapper[5112]: I1208 17:42:21.316455 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:42:21 crc kubenswrapper[5112]: E1208 17:42:21.316650 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 17:42:21 crc kubenswrapper[5112]: E1208 17:42:21.316740 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 17:42:23 crc kubenswrapper[5112]: E1208 17:42:23.276701 5112 kubelet_node_status.go:509] "Node not becoming ready in time after startup" Dec 08 17:42:23 crc kubenswrapper[5112]: I1208 17:42:23.315855 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:42:23 crc kubenswrapper[5112]: I1208 17:42:23.315986 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:42:23 crc kubenswrapper[5112]: E1208 17:42:23.315993 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 17:42:23 crc kubenswrapper[5112]: E1208 17:42:23.322415 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 17:42:23 crc kubenswrapper[5112]: I1208 17:42:23.322450 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jq8h" Dec 08 17:42:23 crc kubenswrapper[5112]: I1208 17:42:23.322483 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:42:23 crc kubenswrapper[5112]: E1208 17:42:23.323183 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 17:42:23 crc kubenswrapper[5112]: E1208 17:42:23.323960 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jq8h" podUID="3c4fb553-8514-4194-847c-96d40f8b41e3" Dec 08 17:42:23 crc kubenswrapper[5112]: I1208 17:42:23.324924 5112 scope.go:117] "RemoveContainer" containerID="7a8bdf5c7af29378b589fe72e7884a1b14b3ee02680ca8842031e246f786fa59" Dec 08 17:42:23 crc kubenswrapper[5112]: E1208 17:42:23.402519 5112 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Dec 08 17:42:23 crc kubenswrapper[5112]: I1208 17:42:23.899102 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 08 17:42:23 crc kubenswrapper[5112]: I1208 17:42:23.901294 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"7ef6cbbdf721c409b69927a89643bb23313daa2a55d76df271c59ced0881af9d"} Dec 08 17:42:23 crc kubenswrapper[5112]: I1208 17:42:23.901715 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:42:23 crc kubenswrapper[5112]: I1208 17:42:23.921407 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=42.921391119 podStartE2EDuration="42.921391119s" podCreationTimestamp="2025-12-08 17:41:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:23.921388389 +0000 UTC m=+120.930937090" watchObservedRunningTime="2025-12-08 17:42:23.921391119 +0000 UTC m=+120.930939820" Dec 08 17:42:25 crc kubenswrapper[5112]: I1208 17:42:25.316726 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jq8h" Dec 08 17:42:25 crc kubenswrapper[5112]: I1208 17:42:25.316737 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:42:25 crc kubenswrapper[5112]: I1208 17:42:25.316741 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:42:25 crc kubenswrapper[5112]: E1208 17:42:25.317035 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jq8h" podUID="3c4fb553-8514-4194-847c-96d40f8b41e3" Dec 08 17:42:25 crc kubenswrapper[5112]: I1208 17:42:25.317123 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:42:25 crc kubenswrapper[5112]: E1208 17:42:25.317183 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 17:42:25 crc kubenswrapper[5112]: E1208 17:42:25.317209 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 17:42:25 crc kubenswrapper[5112]: E1208 17:42:25.317341 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 17:42:27 crc kubenswrapper[5112]: I1208 17:42:27.316472 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jq8h" Dec 08 17:42:27 crc kubenswrapper[5112]: I1208 17:42:27.316512 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:42:27 crc kubenswrapper[5112]: I1208 17:42:27.316520 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:42:27 crc kubenswrapper[5112]: I1208 17:42:27.316476 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:42:27 crc kubenswrapper[5112]: E1208 17:42:27.316648 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7jq8h" podUID="3c4fb553-8514-4194-847c-96d40f8b41e3" Dec 08 17:42:27 crc kubenswrapper[5112]: E1208 17:42:27.316731 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 17:42:27 crc kubenswrapper[5112]: E1208 17:42:27.316906 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 17:42:27 crc kubenswrapper[5112]: E1208 17:42:27.317134 5112 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 17:42:29 crc kubenswrapper[5112]: I1208 17:42:29.316121 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:42:29 crc kubenswrapper[5112]: I1208 17:42:29.316173 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:42:29 crc kubenswrapper[5112]: I1208 17:42:29.316291 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jq8h" Dec 08 17:42:29 crc kubenswrapper[5112]: I1208 17:42:29.316336 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:42:29 crc kubenswrapper[5112]: I1208 17:42:29.318066 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Dec 08 17:42:29 crc kubenswrapper[5112]: I1208 17:42:29.318856 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Dec 08 17:42:29 crc kubenswrapper[5112]: I1208 17:42:29.318904 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Dec 08 17:42:29 crc kubenswrapper[5112]: I1208 17:42:29.319073 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Dec 08 17:42:29 crc kubenswrapper[5112]: I1208 17:42:29.319420 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Dec 08 17:42:29 crc kubenswrapper[5112]: I1208 17:42:29.319457 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.609930 5112 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.641371 5112 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-controller-manager/controller-manager-65b6cccf98-p8dgq"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.658989 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-9rvxw"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.659220 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-p8dgq" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.662897 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.663245 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.673060 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-n6jr7"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.673567 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.673808 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.673943 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.675598 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-sshdm"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.675849 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-9rvxw" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.676449 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.678106 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-8k2zp"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.678773 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sshdm" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.682369 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-k5crt"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.682460 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-n6jr7" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.682807 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-8k2zp" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.685657 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-7q49w"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.686333 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.686841 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.687063 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.687229 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.687944 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-f7bpv"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.688223 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k5crt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.690400 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.690444 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-m2rqt"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.690603 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.690953 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.691305 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-f7bpv" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.691334 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.691416 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.691499 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.691606 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.692145 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.692253 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.692378 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.692514 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.692677 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.692800 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.693034 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.693273 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rc5qq"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.694177 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64d44f6ddf-m2rqt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.691306 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.698824 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-74dth"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.699219 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-7q49w" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.702409 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-gv282"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.702534 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rc5qq" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.705107 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-82k2c"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.705962 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-747b44746d-gv282" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.726563 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.726996 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.726563 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.727246 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.727921 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.728230 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.729167 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.729809 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.730422 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.730890 5112 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.732823 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.732894 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.733329 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.733408 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.733500 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.733646 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.733719 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.733771 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.733875 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.734006 5112 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-console\"/\"console-serving-cert\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.734112 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.734021 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.734293 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.734383 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.734417 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.734439 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.734620 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.734758 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.734790 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.734989 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.735360 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.735540 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.737066 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.739618 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.740817 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.743318 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.746369 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.747933 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.748045 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Dec 08 17:42:34 crc 
kubenswrapper[5112]: I1208 17:42:34.748828 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.749331 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.749483 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.750406 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.753622 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.755644 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.757230 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.770527 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e175a7a0-9b51-4b5d-b85a-dd604a3db837-service-ca\") pod \"console-64d44f6ddf-m2rqt\" (UID: \"e175a7a0-9b51-4b5d-b85a-dd604a3db837\") " pod="openshift-console/console-64d44f6ddf-m2rqt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.770567 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/7e0c9c4f-1216-499b-a1dd-be2f225cb97f-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-p8dgq\" (UID: \"7e0c9c4f-1216-499b-a1dd-be2f225cb97f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-p8dgq" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.770586 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2mq6\" (UniqueName: \"kubernetes.io/projected/7e0c9c4f-1216-499b-a1dd-be2f225cb97f-kube-api-access-l2mq6\") pod \"controller-manager-65b6cccf98-p8dgq\" (UID: \"7e0c9c4f-1216-499b-a1dd-be2f225cb97f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-p8dgq" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.770605 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af065ece-a0e6-49a0-ba5e-21875f49cbd2-config\") pod \"route-controller-manager-776cdc94d6-k5crt\" (UID: \"af065ece-a0e6-49a0-ba5e-21875f49cbd2\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k5crt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.770621 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhplh\" (UniqueName: \"kubernetes.io/projected/de7615f0-5173-4b64-8f4d-ba4da37884b6-kube-api-access-nhplh\") pod \"oauth-openshift-66458b6674-74dth\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.770638 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/19fcd464-915f-4883-8da8-c4dffba0bbbd-audit-dir\") pod \"apiserver-9ddfb9f55-n6jr7\" (UID: \"19fcd464-915f-4883-8da8-c4dffba0bbbd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-n6jr7" Dec 08 
17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.770663 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/de7615f0-5173-4b64-8f4d-ba4da37884b6-audit-policies\") pod \"oauth-openshift-66458b6674-74dth\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.770678 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-74dth\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.770707 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/210c8180-5efd-403d-bc10-32004b40c0dc-etcd-serving-ca\") pod \"apiserver-8596bd845d-8k2zp\" (UID: \"210c8180-5efd-403d-bc10-32004b40c0dc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8k2zp" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.770721 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/210c8180-5efd-403d-bc10-32004b40c0dc-encryption-config\") pod \"apiserver-8596bd845d-8k2zp\" (UID: \"210c8180-5efd-403d-bc10-32004b40c0dc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8k2zp" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.770737 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/754d2239-a1f1-4950-af6d-5f18fcc9b2db-config\") pod 
\"console-operator-67c89758df-7q49w\" (UID: \"754d2239-a1f1-4950-af6d-5f18fcc9b2db\") " pod="openshift-console-operator/console-operator-67c89758df-7q49w" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.770752 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl9hl\" (UniqueName: \"kubernetes.io/projected/d032b7b0-4a86-448c-b592-dd1633f1152e-kube-api-access-wl9hl\") pod \"openshift-apiserver-operator-846cbfc458-f7bpv\" (UID: \"d032b7b0-4a86-448c-b592-dd1633f1152e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-f7bpv" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.770767 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19fcd464-915f-4883-8da8-c4dffba0bbbd-config\") pod \"apiserver-9ddfb9f55-n6jr7\" (UID: \"19fcd464-915f-4883-8da8-c4dffba0bbbd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-n6jr7" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.770781 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6qgc\" (UniqueName: \"kubernetes.io/projected/3b27b80a-df1a-4a29-82d6-384db5b6612e-kube-api-access-r6qgc\") pod \"downloads-747b44746d-gv282\" (UID: \"3b27b80a-df1a-4a29-82d6-384db5b6612e\") " pod="openshift-console/downloads-747b44746d-gv282" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.770798 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-865bw\" (UniqueName: \"kubernetes.io/projected/e175a7a0-9b51-4b5d-b85a-dd604a3db837-kube-api-access-865bw\") pod \"console-64d44f6ddf-m2rqt\" (UID: \"e175a7a0-9b51-4b5d-b85a-dd604a3db837\") " pod="openshift-console/console-64d44f6ddf-m2rqt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.770812 5112 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19fcd464-915f-4883-8da8-c4dffba0bbbd-serving-cert\") pod \"apiserver-9ddfb9f55-n6jr7\" (UID: \"19fcd464-915f-4883-8da8-c4dffba0bbbd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-n6jr7" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.770836 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e175a7a0-9b51-4b5d-b85a-dd604a3db837-trusted-ca-bundle\") pod \"console-64d44f6ddf-m2rqt\" (UID: \"e175a7a0-9b51-4b5d-b85a-dd604a3db837\") " pod="openshift-console/console-64d44f6ddf-m2rqt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.770850 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/210c8180-5efd-403d-bc10-32004b40c0dc-audit-dir\") pod \"apiserver-8596bd845d-8k2zp\" (UID: \"210c8180-5efd-403d-bc10-32004b40c0dc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8k2zp" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.770863 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e0c9c4f-1216-499b-a1dd-be2f225cb97f-client-ca\") pod \"controller-manager-65b6cccf98-p8dgq\" (UID: \"7e0c9c4f-1216-499b-a1dd-be2f225cb97f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-p8dgq" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.770876 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d032b7b0-4a86-448c-b592-dd1633f1152e-config\") pod \"openshift-apiserver-operator-846cbfc458-f7bpv\" (UID: \"d032b7b0-4a86-448c-b592-dd1633f1152e\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-f7bpv" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.770890 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-74dth\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.770906 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e175a7a0-9b51-4b5d-b85a-dd604a3db837-console-config\") pod \"console-64d44f6ddf-m2rqt\" (UID: \"e175a7a0-9b51-4b5d-b85a-dd604a3db837\") " pod="openshift-console/console-64d44f6ddf-m2rqt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.770920 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6410ce59-323e-498a-b4f6-fe662a4c2d9b-images\") pod \"machine-api-operator-755bb95488-9rvxw\" (UID: \"6410ce59-323e-498a-b4f6-fe662a4c2d9b\") " pod="openshift-machine-api/machine-api-operator-755bb95488-9rvxw" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.770935 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b42st\" (UniqueName: \"kubernetes.io/projected/280972fd-54d5-4bd4-824f-6e5d16f77f21-kube-api-access-b42st\") pod \"authentication-operator-7f5c659b84-sshdm\" (UID: \"280972fd-54d5-4bd4-824f-6e5d16f77f21\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sshdm" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.770948 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-74dth\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.770965 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af065ece-a0e6-49a0-ba5e-21875f49cbd2-serving-cert\") pod \"route-controller-manager-776cdc94d6-k5crt\" (UID: \"af065ece-a0e6-49a0-ba5e-21875f49cbd2\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k5crt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.770980 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e0c9c4f-1216-499b-a1dd-be2f225cb97f-serving-cert\") pod \"controller-manager-65b6cccf98-p8dgq\" (UID: \"7e0c9c4f-1216-499b-a1dd-be2f225cb97f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-p8dgq" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.771057 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af065ece-a0e6-49a0-ba5e-21875f49cbd2-client-ca\") pod \"route-controller-manager-776cdc94d6-k5crt\" (UID: \"af065ece-a0e6-49a0-ba5e-21875f49cbd2\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k5crt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.771072 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/19fcd464-915f-4883-8da8-c4dffba0bbbd-etcd-client\") pod \"apiserver-9ddfb9f55-n6jr7\" (UID: \"19fcd464-915f-4883-8da8-c4dffba0bbbd\") " 
pod="openshift-apiserver/apiserver-9ddfb9f55-n6jr7" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.771126 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/19fcd464-915f-4883-8da8-c4dffba0bbbd-image-import-ca\") pod \"apiserver-9ddfb9f55-n6jr7\" (UID: \"19fcd464-915f-4883-8da8-c4dffba0bbbd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-n6jr7" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.771143 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ljt8\" (UniqueName: \"kubernetes.io/projected/754d2239-a1f1-4950-af6d-5f18fcc9b2db-kube-api-access-6ljt8\") pod \"console-operator-67c89758df-7q49w\" (UID: \"754d2239-a1f1-4950-af6d-5f18fcc9b2db\") " pod="openshift-console-operator/console-operator-67c89758df-7q49w" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.771157 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/280972fd-54d5-4bd4-824f-6e5d16f77f21-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-sshdm\" (UID: \"280972fd-54d5-4bd4-824f-6e5d16f77f21\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sshdm" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.771175 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e175a7a0-9b51-4b5d-b85a-dd604a3db837-console-oauth-config\") pod \"console-64d44f6ddf-m2rqt\" (UID: \"e175a7a0-9b51-4b5d-b85a-dd604a3db837\") " pod="openshift-console/console-64d44f6ddf-m2rqt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.771190 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vl89\" (UniqueName: 
\"kubernetes.io/projected/82cb6e24-6805-46a7-8f49-7d48eb8684fe-kube-api-access-7vl89\") pod \"cluster-samples-operator-6b564684c8-rc5qq\" (UID: \"82cb6e24-6805-46a7-8f49-7d48eb8684fe\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rc5qq" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.771214 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/210c8180-5efd-403d-bc10-32004b40c0dc-etcd-client\") pod \"apiserver-8596bd845d-8k2zp\" (UID: \"210c8180-5efd-403d-bc10-32004b40c0dc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8k2zp" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.771230 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/280972fd-54d5-4bd4-824f-6e5d16f77f21-serving-cert\") pod \"authentication-operator-7f5c659b84-sshdm\" (UID: \"280972fd-54d5-4bd4-824f-6e5d16f77f21\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sshdm" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.771244 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/754d2239-a1f1-4950-af6d-5f18fcc9b2db-trusted-ca\") pod \"console-operator-67c89758df-7q49w\" (UID: \"754d2239-a1f1-4950-af6d-5f18fcc9b2db\") " pod="openshift-console-operator/console-operator-67c89758df-7q49w" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.771258 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/19fcd464-915f-4883-8da8-c4dffba0bbbd-node-pullsecrets\") pod \"apiserver-9ddfb9f55-n6jr7\" (UID: \"19fcd464-915f-4883-8da8-c4dffba0bbbd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-n6jr7" Dec 08 17:42:34 crc 
kubenswrapper[5112]: I1208 17:42:34.771273 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/19fcd464-915f-4883-8da8-c4dffba0bbbd-encryption-config\") pod \"apiserver-9ddfb9f55-n6jr7\" (UID: \"19fcd464-915f-4883-8da8-c4dffba0bbbd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-n6jr7" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.771288 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-74dth\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.771306 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6410ce59-323e-498a-b4f6-fe662a4c2d9b-config\") pod \"machine-api-operator-755bb95488-9rvxw\" (UID: \"6410ce59-323e-498a-b4f6-fe662a4c2d9b\") " pod="openshift-machine-api/machine-api-operator-755bb95488-9rvxw" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.771321 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-74dth\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.771340 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/19fcd464-915f-4883-8da8-c4dffba0bbbd-audit\") pod \"apiserver-9ddfb9f55-n6jr7\" (UID: \"19fcd464-915f-4883-8da8-c4dffba0bbbd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-n6jr7" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.771355 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-74dth\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.771371 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwq8t\" (UniqueName: \"kubernetes.io/projected/af065ece-a0e6-49a0-ba5e-21875f49cbd2-kube-api-access-jwq8t\") pod \"route-controller-manager-776cdc94d6-k5crt\" (UID: \"af065ece-a0e6-49a0-ba5e-21875f49cbd2\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k5crt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.771385 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210c8180-5efd-403d-bc10-32004b40c0dc-serving-cert\") pod \"apiserver-8596bd845d-8k2zp\" (UID: \"210c8180-5efd-403d-bc10-32004b40c0dc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8k2zp" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.771401 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6410ce59-323e-498a-b4f6-fe662a4c2d9b-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-9rvxw\" (UID: \"6410ce59-323e-498a-b4f6-fe662a4c2d9b\") " 
pod="openshift-machine-api/machine-api-operator-755bb95488-9rvxw" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.771417 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-74dth\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.771434 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvznq\" (UniqueName: \"kubernetes.io/projected/6410ce59-323e-498a-b4f6-fe662a4c2d9b-kube-api-access-kvznq\") pod \"machine-api-operator-755bb95488-9rvxw\" (UID: \"6410ce59-323e-498a-b4f6-fe662a4c2d9b\") " pod="openshift-machine-api/machine-api-operator-755bb95488-9rvxw" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.771448 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-74dth\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.771468 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/210c8180-5efd-403d-bc10-32004b40c0dc-audit-policies\") pod \"apiserver-8596bd845d-8k2zp\" (UID: \"210c8180-5efd-403d-bc10-32004b40c0dc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8k2zp" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.771483 5112 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d032b7b0-4a86-448c-b592-dd1633f1152e-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-f7bpv\" (UID: \"d032b7b0-4a86-448c-b592-dd1633f1152e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-f7bpv" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.771501 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e175a7a0-9b51-4b5d-b85a-dd604a3db837-console-serving-cert\") pod \"console-64d44f6ddf-m2rqt\" (UID: \"e175a7a0-9b51-4b5d-b85a-dd604a3db837\") " pod="openshift-console/console-64d44f6ddf-m2rqt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.771514 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-74dth\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.771530 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e0c9c4f-1216-499b-a1dd-be2f225cb97f-config\") pod \"controller-manager-65b6cccf98-p8dgq\" (UID: \"7e0c9c4f-1216-499b-a1dd-be2f225cb97f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-p8dgq" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.771544 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7e0c9c4f-1216-499b-a1dd-be2f225cb97f-tmp\") pod \"controller-manager-65b6cccf98-p8dgq\" (UID: 
\"7e0c9c4f-1216-499b-a1dd-be2f225cb97f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-p8dgq" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.771558 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/19fcd464-915f-4883-8da8-c4dffba0bbbd-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-n6jr7\" (UID: \"19fcd464-915f-4883-8da8-c4dffba0bbbd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-n6jr7" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.771572 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/280972fd-54d5-4bd4-824f-6e5d16f77f21-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-sshdm\" (UID: \"280972fd-54d5-4bd4-824f-6e5d16f77f21\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sshdm" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.771588 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-74dth\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.771603 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-74dth\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.771623 5112 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/af065ece-a0e6-49a0-ba5e-21875f49cbd2-tmp\") pod \"route-controller-manager-776cdc94d6-k5crt\" (UID: \"af065ece-a0e6-49a0-ba5e-21875f49cbd2\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k5crt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.771636 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e175a7a0-9b51-4b5d-b85a-dd604a3db837-oauth-serving-cert\") pod \"console-64d44f6ddf-m2rqt\" (UID: \"e175a7a0-9b51-4b5d-b85a-dd604a3db837\") " pod="openshift-console/console-64d44f6ddf-m2rqt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.771655 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/82cb6e24-6805-46a7-8f49-7d48eb8684fe-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-rc5qq\" (UID: \"82cb6e24-6805-46a7-8f49-7d48eb8684fe\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rc5qq" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.771670 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19fcd464-915f-4883-8da8-c4dffba0bbbd-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-n6jr7\" (UID: \"19fcd464-915f-4883-8da8-c4dffba0bbbd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-n6jr7" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.771685 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/754d2239-a1f1-4950-af6d-5f18fcc9b2db-serving-cert\") pod \"console-operator-67c89758df-7q49w\" (UID: 
\"754d2239-a1f1-4950-af6d-5f18fcc9b2db\") " pod="openshift-console-operator/console-operator-67c89758df-7q49w" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.771700 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/de7615f0-5173-4b64-8f4d-ba4da37884b6-audit-dir\") pod \"oauth-openshift-66458b6674-74dth\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.771730 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4brt\" (UniqueName: \"kubernetes.io/projected/19fcd464-915f-4883-8da8-c4dffba0bbbd-kube-api-access-j4brt\") pod \"apiserver-9ddfb9f55-n6jr7\" (UID: \"19fcd464-915f-4883-8da8-c4dffba0bbbd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-n6jr7" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.771870 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/280972fd-54d5-4bd4-824f-6e5d16f77f21-config\") pod \"authentication-operator-7f5c659b84-sshdm\" (UID: \"280972fd-54d5-4bd4-824f-6e5d16f77f21\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sshdm" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.771908 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/210c8180-5efd-403d-bc10-32004b40c0dc-trusted-ca-bundle\") pod \"apiserver-8596bd845d-8k2zp\" (UID: \"210c8180-5efd-403d-bc10-32004b40c0dc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8k2zp" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.771928 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-mn55l\" (UniqueName: \"kubernetes.io/projected/210c8180-5efd-403d-bc10-32004b40c0dc-kube-api-access-mn55l\") pod \"apiserver-8596bd845d-8k2zp\" (UID: \"210c8180-5efd-403d-bc10-32004b40c0dc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8k2zp" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.807554 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-cxzx8"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.808161 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.811367 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-qkblt"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.811867 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.812486 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-82k2c" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.813234 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.814646 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.814679 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-h7sbx"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.814879 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.814921 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.815046 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.815194 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.815273 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.815324 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-cxzx8" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.815397 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qkblt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.825565 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-hc5xj"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.825997 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-h7sbx" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.826704 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.826910 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.827161 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.827242 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.827297 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.827401 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 
17:42:34.827603 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.828633 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.828825 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.829112 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.832383 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.832539 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.832673 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.833023 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.835352 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.835445 5112 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.836903 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.840822 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.841433 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.846597 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.861379 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.872369 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af065ece-a0e6-49a0-ba5e-21875f49cbd2-serving-cert\") pod \"route-controller-manager-776cdc94d6-k5crt\" (UID: \"af065ece-a0e6-49a0-ba5e-21875f49cbd2\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k5crt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.872403 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e0c9c4f-1216-499b-a1dd-be2f225cb97f-serving-cert\") pod \"controller-manager-65b6cccf98-p8dgq\" (UID: \"7e0c9c4f-1216-499b-a1dd-be2f225cb97f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-p8dgq" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.872423 5112 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af065ece-a0e6-49a0-ba5e-21875f49cbd2-client-ca\") pod \"route-controller-manager-776cdc94d6-k5crt\" (UID: \"af065ece-a0e6-49a0-ba5e-21875f49cbd2\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k5crt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.872437 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/19fcd464-915f-4883-8da8-c4dffba0bbbd-etcd-client\") pod \"apiserver-9ddfb9f55-n6jr7\" (UID: \"19fcd464-915f-4883-8da8-c4dffba0bbbd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-n6jr7" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.872452 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/19fcd464-915f-4883-8da8-c4dffba0bbbd-image-import-ca\") pod \"apiserver-9ddfb9f55-n6jr7\" (UID: \"19fcd464-915f-4883-8da8-c4dffba0bbbd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-n6jr7" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.872467 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6ljt8\" (UniqueName: \"kubernetes.io/projected/754d2239-a1f1-4950-af6d-5f18fcc9b2db-kube-api-access-6ljt8\") pod \"console-operator-67c89758df-7q49w\" (UID: \"754d2239-a1f1-4950-af6d-5f18fcc9b2db\") " pod="openshift-console-operator/console-operator-67c89758df-7q49w" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.872485 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/280972fd-54d5-4bd4-824f-6e5d16f77f21-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-sshdm\" (UID: \"280972fd-54d5-4bd4-824f-6e5d16f77f21\") " 
pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sshdm" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.872687 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e175a7a0-9b51-4b5d-b85a-dd604a3db837-console-oauth-config\") pod \"console-64d44f6ddf-m2rqt\" (UID: \"e175a7a0-9b51-4b5d-b85a-dd604a3db837\") " pod="openshift-console/console-64d44f6ddf-m2rqt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.872711 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7vl89\" (UniqueName: \"kubernetes.io/projected/82cb6e24-6805-46a7-8f49-7d48eb8684fe-kube-api-access-7vl89\") pod \"cluster-samples-operator-6b564684c8-rc5qq\" (UID: \"82cb6e24-6805-46a7-8f49-7d48eb8684fe\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rc5qq" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.872726 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/210c8180-5efd-403d-bc10-32004b40c0dc-etcd-client\") pod \"apiserver-8596bd845d-8k2zp\" (UID: \"210c8180-5efd-403d-bc10-32004b40c0dc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8k2zp" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.872741 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/280972fd-54d5-4bd4-824f-6e5d16f77f21-serving-cert\") pod \"authentication-operator-7f5c659b84-sshdm\" (UID: \"280972fd-54d5-4bd4-824f-6e5d16f77f21\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sshdm" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.872760 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5543e4e6-3fcd-4469-961d-5e3ed283f0dd-serving-cert\") pod \"etcd-operator-69b85846b6-h7sbx\" (UID: \"5543e4e6-3fcd-4469-961d-5e3ed283f0dd\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-h7sbx" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.872776 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5543e4e6-3fcd-4469-961d-5e3ed283f0dd-config\") pod \"etcd-operator-69b85846b6-h7sbx\" (UID: \"5543e4e6-3fcd-4469-961d-5e3ed283f0dd\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-h7sbx" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.872791 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/754d2239-a1f1-4950-af6d-5f18fcc9b2db-trusted-ca\") pod \"console-operator-67c89758df-7q49w\" (UID: \"754d2239-a1f1-4950-af6d-5f18fcc9b2db\") " pod="openshift-console-operator/console-operator-67c89758df-7q49w" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.872806 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/19fcd464-915f-4883-8da8-c4dffba0bbbd-node-pullsecrets\") pod \"apiserver-9ddfb9f55-n6jr7\" (UID: \"19fcd464-915f-4883-8da8-c4dffba0bbbd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-n6jr7" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.872820 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/19fcd464-915f-4883-8da8-c4dffba0bbbd-encryption-config\") pod \"apiserver-9ddfb9f55-n6jr7\" (UID: \"19fcd464-915f-4883-8da8-c4dffba0bbbd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-n6jr7" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.872835 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-74dth\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.872854 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb7829a6-bbd3-49f8-8dc2-8a605fe4b138-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-82k2c\" (UID: \"bb7829a6-bbd3-49f8-8dc2-8a605fe4b138\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-82k2c" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.872879 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6410ce59-323e-498a-b4f6-fe662a4c2d9b-config\") pod \"machine-api-operator-755bb95488-9rvxw\" (UID: \"6410ce59-323e-498a-b4f6-fe662a4c2d9b\") " pod="openshift-machine-api/machine-api-operator-755bb95488-9rvxw" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.872894 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-74dth\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.872915 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/19fcd464-915f-4883-8da8-c4dffba0bbbd-audit\") pod \"apiserver-9ddfb9f55-n6jr7\" (UID: \"19fcd464-915f-4883-8da8-c4dffba0bbbd\") " 
pod="openshift-apiserver/apiserver-9ddfb9f55-n6jr7" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.872933 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-74dth\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.872950 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5543e4e6-3fcd-4469-961d-5e3ed283f0dd-tmp-dir\") pod \"etcd-operator-69b85846b6-h7sbx\" (UID: \"5543e4e6-3fcd-4469-961d-5e3ed283f0dd\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-h7sbx" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.872968 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jwq8t\" (UniqueName: \"kubernetes.io/projected/af065ece-a0e6-49a0-ba5e-21875f49cbd2-kube-api-access-jwq8t\") pod \"route-controller-manager-776cdc94d6-k5crt\" (UID: \"af065ece-a0e6-49a0-ba5e-21875f49cbd2\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k5crt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.872984 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210c8180-5efd-403d-bc10-32004b40c0dc-serving-cert\") pod \"apiserver-8596bd845d-8k2zp\" (UID: \"210c8180-5efd-403d-bc10-32004b40c0dc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8k2zp" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.873003 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6410ce59-323e-498a-b4f6-fe662a4c2d9b-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-9rvxw\" (UID: \"6410ce59-323e-498a-b4f6-fe662a4c2d9b\") " pod="openshift-machine-api/machine-api-operator-755bb95488-9rvxw" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.873019 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-74dth\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.873041 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kvznq\" (UniqueName: \"kubernetes.io/projected/6410ce59-323e-498a-b4f6-fe662a4c2d9b-kube-api-access-kvznq\") pod \"machine-api-operator-755bb95488-9rvxw\" (UID: \"6410ce59-323e-498a-b4f6-fe662a4c2d9b\") " pod="openshift-machine-api/machine-api-operator-755bb95488-9rvxw" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.873060 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-74dth\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.873109 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/500cfd87-2e0f-4321-a7d5-f19d851aafc9-tmp-dir\") pod \"dns-operator-799b87ffcd-cxzx8\" (UID: \"500cfd87-2e0f-4321-a7d5-f19d851aafc9\") " 
pod="openshift-dns-operator/dns-operator-799b87ffcd-cxzx8" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.873124 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk4mk\" (UniqueName: \"kubernetes.io/projected/500cfd87-2e0f-4321-a7d5-f19d851aafc9-kube-api-access-tk4mk\") pod \"dns-operator-799b87ffcd-cxzx8\" (UID: \"500cfd87-2e0f-4321-a7d5-f19d851aafc9\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-cxzx8" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.873143 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/210c8180-5efd-403d-bc10-32004b40c0dc-audit-policies\") pod \"apiserver-8596bd845d-8k2zp\" (UID: \"210c8180-5efd-403d-bc10-32004b40c0dc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8k2zp" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.873160 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d032b7b0-4a86-448c-b592-dd1633f1152e-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-f7bpv\" (UID: \"d032b7b0-4a86-448c-b592-dd1633f1152e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-f7bpv" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.873178 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/98f49f4b-546f-43bb-bfa3-c6966837ab7c-tmp\") pod \"cluster-image-registry-operator-86c45576b9-qkblt\" (UID: \"98f49f4b-546f-43bb-bfa3-c6966837ab7c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qkblt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.873192 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/5543e4e6-3fcd-4469-961d-5e3ed283f0dd-etcd-client\") pod \"etcd-operator-69b85846b6-h7sbx\" (UID: \"5543e4e6-3fcd-4469-961d-5e3ed283f0dd\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-h7sbx" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.873211 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e175a7a0-9b51-4b5d-b85a-dd604a3db837-console-serving-cert\") pod \"console-64d44f6ddf-m2rqt\" (UID: \"e175a7a0-9b51-4b5d-b85a-dd604a3db837\") " pod="openshift-console/console-64d44f6ddf-m2rqt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.873226 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-74dth\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.873243 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e0c9c4f-1216-499b-a1dd-be2f225cb97f-config\") pod \"controller-manager-65b6cccf98-p8dgq\" (UID: \"7e0c9c4f-1216-499b-a1dd-be2f225cb97f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-p8dgq" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.873258 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7e0c9c4f-1216-499b-a1dd-be2f225cb97f-tmp\") pod \"controller-manager-65b6cccf98-p8dgq\" (UID: \"7e0c9c4f-1216-499b-a1dd-be2f225cb97f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-p8dgq" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.873274 5112 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/19fcd464-915f-4883-8da8-c4dffba0bbbd-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-n6jr7\" (UID: \"19fcd464-915f-4883-8da8-c4dffba0bbbd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-n6jr7" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.873288 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/280972fd-54d5-4bd4-824f-6e5d16f77f21-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-sshdm\" (UID: \"280972fd-54d5-4bd4-824f-6e5d16f77f21\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sshdm" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.873303 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-74dth\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.873318 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-74dth\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.873338 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/af065ece-a0e6-49a0-ba5e-21875f49cbd2-tmp\") pod \"route-controller-manager-776cdc94d6-k5crt\" (UID: \"af065ece-a0e6-49a0-ba5e-21875f49cbd2\") " 
pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k5crt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.873357 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e175a7a0-9b51-4b5d-b85a-dd604a3db837-oauth-serving-cert\") pod \"console-64d44f6ddf-m2rqt\" (UID: \"e175a7a0-9b51-4b5d-b85a-dd604a3db837\") " pod="openshift-console/console-64d44f6ddf-m2rqt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.873374 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/82cb6e24-6805-46a7-8f49-7d48eb8684fe-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-rc5qq\" (UID: \"82cb6e24-6805-46a7-8f49-7d48eb8684fe\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rc5qq" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.873388 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19fcd464-915f-4883-8da8-c4dffba0bbbd-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-n6jr7\" (UID: \"19fcd464-915f-4883-8da8-c4dffba0bbbd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-n6jr7" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.873404 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/754d2239-a1f1-4950-af6d-5f18fcc9b2db-serving-cert\") pod \"console-operator-67c89758df-7q49w\" (UID: \"754d2239-a1f1-4950-af6d-5f18fcc9b2db\") " pod="openshift-console-operator/console-operator-67c89758df-7q49w" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.873418 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/de7615f0-5173-4b64-8f4d-ba4da37884b6-audit-dir\") pod 
\"oauth-openshift-66458b6674-74dth\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.873433 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/500cfd87-2e0f-4321-a7d5-f19d851aafc9-metrics-tls\") pod \"dns-operator-799b87ffcd-cxzx8\" (UID: \"500cfd87-2e0f-4321-a7d5-f19d851aafc9\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-cxzx8" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.873457 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j4brt\" (UniqueName: \"kubernetes.io/projected/19fcd464-915f-4883-8da8-c4dffba0bbbd-kube-api-access-j4brt\") pod \"apiserver-9ddfb9f55-n6jr7\" (UID: \"19fcd464-915f-4883-8da8-c4dffba0bbbd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-n6jr7" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.873473 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/280972fd-54d5-4bd4-824f-6e5d16f77f21-config\") pod \"authentication-operator-7f5c659b84-sshdm\" (UID: \"280972fd-54d5-4bd4-824f-6e5d16f77f21\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sshdm" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.873492 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/210c8180-5efd-403d-bc10-32004b40c0dc-trusted-ca-bundle\") pod \"apiserver-8596bd845d-8k2zp\" (UID: \"210c8180-5efd-403d-bc10-32004b40c0dc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8k2zp" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.873507 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mn55l\" (UniqueName: 
\"kubernetes.io/projected/210c8180-5efd-403d-bc10-32004b40c0dc-kube-api-access-mn55l\") pod \"apiserver-8596bd845d-8k2zp\" (UID: \"210c8180-5efd-403d-bc10-32004b40c0dc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8k2zp" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.873522 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5543e4e6-3fcd-4469-961d-5e3ed283f0dd-etcd-ca\") pod \"etcd-operator-69b85846b6-h7sbx\" (UID: \"5543e4e6-3fcd-4469-961d-5e3ed283f0dd\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-h7sbx" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.873536 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvcvt\" (UniqueName: \"kubernetes.io/projected/5543e4e6-3fcd-4469-961d-5e3ed283f0dd-kube-api-access-zvcvt\") pod \"etcd-operator-69b85846b6-h7sbx\" (UID: \"5543e4e6-3fcd-4469-961d-5e3ed283f0dd\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-h7sbx" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.873554 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e175a7a0-9b51-4b5d-b85a-dd604a3db837-service-ca\") pod \"console-64d44f6ddf-m2rqt\" (UID: \"e175a7a0-9b51-4b5d-b85a-dd604a3db837\") " pod="openshift-console/console-64d44f6ddf-m2rqt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.873571 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e0c9c4f-1216-499b-a1dd-be2f225cb97f-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-p8dgq\" (UID: \"7e0c9c4f-1216-499b-a1dd-be2f225cb97f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-p8dgq" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.873566 5112 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af065ece-a0e6-49a0-ba5e-21875f49cbd2-client-ca\") pod \"route-controller-manager-776cdc94d6-k5crt\" (UID: \"af065ece-a0e6-49a0-ba5e-21875f49cbd2\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k5crt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.873586 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l2mq6\" (UniqueName: \"kubernetes.io/projected/7e0c9c4f-1216-499b-a1dd-be2f225cb97f-kube-api-access-l2mq6\") pod \"controller-manager-65b6cccf98-p8dgq\" (UID: \"7e0c9c4f-1216-499b-a1dd-be2f225cb97f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-p8dgq" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.873603 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/98f49f4b-546f-43bb-bfa3-c6966837ab7c-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-qkblt\" (UID: \"98f49f4b-546f-43bb-bfa3-c6966837ab7c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qkblt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.873654 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/19fcd464-915f-4883-8da8-c4dffba0bbbd-image-import-ca\") pod \"apiserver-9ddfb9f55-n6jr7\" (UID: \"19fcd464-915f-4883-8da8-c4dffba0bbbd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-n6jr7" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.874288 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fpwm7"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.874835 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/af065ece-a0e6-49a0-ba5e-21875f49cbd2-config\") pod \"route-controller-manager-776cdc94d6-k5crt\" (UID: \"af065ece-a0e6-49a0-ba5e-21875f49cbd2\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k5crt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.874944 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nhplh\" (UniqueName: \"kubernetes.io/projected/de7615f0-5173-4b64-8f4d-ba4da37884b6-kube-api-access-nhplh\") pod \"oauth-openshift-66458b6674-74dth\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.875185 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/19fcd464-915f-4883-8da8-c4dffba0bbbd-audit-dir\") pod \"apiserver-9ddfb9f55-n6jr7\" (UID: \"19fcd464-915f-4883-8da8-c4dffba0bbbd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-n6jr7" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.875299 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/98f49f4b-546f-43bb-bfa3-c6966837ab7c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-qkblt\" (UID: \"98f49f4b-546f-43bb-bfa3-c6966837ab7c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qkblt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.875429 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/de7615f0-5173-4b64-8f4d-ba4da37884b6-audit-policies\") pod \"oauth-openshift-66458b6674-74dth\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:42:34 crc 
kubenswrapper[5112]: I1208 17:42:34.875531 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-74dth\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.875740 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/210c8180-5efd-403d-bc10-32004b40c0dc-etcd-serving-ca\") pod \"apiserver-8596bd845d-8k2zp\" (UID: \"210c8180-5efd-403d-bc10-32004b40c0dc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8k2zp" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.875852 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/210c8180-5efd-403d-bc10-32004b40c0dc-encryption-config\") pod \"apiserver-8596bd845d-8k2zp\" (UID: \"210c8180-5efd-403d-bc10-32004b40c0dc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8k2zp" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.875953 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/98f49f4b-546f-43bb-bfa3-c6966837ab7c-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-qkblt\" (UID: \"98f49f4b-546f-43bb-bfa3-c6966837ab7c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qkblt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.876050 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bb7829a6-bbd3-49f8-8dc2-8a605fe4b138-tmp\") pod 
\"openshift-controller-manager-operator-686468bdd5-82k2c\" (UID: \"bb7829a6-bbd3-49f8-8dc2-8a605fe4b138\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-82k2c" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.875965 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/19fcd464-915f-4883-8da8-c4dffba0bbbd-node-pullsecrets\") pod \"apiserver-9ddfb9f55-n6jr7\" (UID: \"19fcd464-915f-4883-8da8-c4dffba0bbbd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-n6jr7" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.874874 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/280972fd-54d5-4bd4-824f-6e5d16f77f21-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-sshdm\" (UID: \"280972fd-54d5-4bd4-824f-6e5d16f77f21\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sshdm" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.875896 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/de7615f0-5173-4b64-8f4d-ba4da37884b6-audit-dir\") pod \"oauth-openshift-66458b6674-74dth\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.876441 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7e0c9c4f-1216-499b-a1dd-be2f225cb97f-tmp\") pod \"controller-manager-65b6cccf98-p8dgq\" (UID: \"7e0c9c4f-1216-499b-a1dd-be2f225cb97f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-p8dgq" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.876645 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6410ce59-323e-498a-b4f6-fe662a4c2d9b-config\") pod \"machine-api-operator-755bb95488-9rvxw\" (UID: \"6410ce59-323e-498a-b4f6-fe662a4c2d9b\") " pod="openshift-machine-api/machine-api-operator-755bb95488-9rvxw" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.876697 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/19fcd464-915f-4883-8da8-c4dffba0bbbd-audit-dir\") pod \"apiserver-9ddfb9f55-n6jr7\" (UID: \"19fcd464-915f-4883-8da8-c4dffba0bbbd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-n6jr7" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.876740 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb7829a6-bbd3-49f8-8dc2-8a605fe4b138-config\") pod \"openshift-controller-manager-operator-686468bdd5-82k2c\" (UID: \"bb7829a6-bbd3-49f8-8dc2-8a605fe4b138\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-82k2c" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.877121 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/754d2239-a1f1-4950-af6d-5f18fcc9b2db-config\") pod \"console-operator-67c89758df-7q49w\" (UID: \"754d2239-a1f1-4950-af6d-5f18fcc9b2db\") " pod="openshift-console-operator/console-operator-67c89758df-7q49w" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.877152 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wl9hl\" (UniqueName: \"kubernetes.io/projected/d032b7b0-4a86-448c-b592-dd1633f1152e-kube-api-access-wl9hl\") pod \"openshift-apiserver-operator-846cbfc458-f7bpv\" (UID: \"d032b7b0-4a86-448c-b592-dd1633f1152e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-f7bpv" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 
17:42:34.877178 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19fcd464-915f-4883-8da8-c4dffba0bbbd-config\") pod \"apiserver-9ddfb9f55-n6jr7\" (UID: \"19fcd464-915f-4883-8da8-c4dffba0bbbd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-n6jr7" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.877198 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r6qgc\" (UniqueName: \"kubernetes.io/projected/3b27b80a-df1a-4a29-82d6-384db5b6612e-kube-api-access-r6qgc\") pod \"downloads-747b44746d-gv282\" (UID: \"3b27b80a-df1a-4a29-82d6-384db5b6612e\") " pod="openshift-console/downloads-747b44746d-gv282" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.877218 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-865bw\" (UniqueName: \"kubernetes.io/projected/e175a7a0-9b51-4b5d-b85a-dd604a3db837-kube-api-access-865bw\") pod \"console-64d44f6ddf-m2rqt\" (UID: \"e175a7a0-9b51-4b5d-b85a-dd604a3db837\") " pod="openshift-console/console-64d44f6ddf-m2rqt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.877235 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19fcd464-915f-4883-8da8-c4dffba0bbbd-serving-cert\") pod \"apiserver-9ddfb9f55-n6jr7\" (UID: \"19fcd464-915f-4883-8da8-c4dffba0bbbd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-n6jr7" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.877255 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5543e4e6-3fcd-4469-961d-5e3ed283f0dd-etcd-service-ca\") pod \"etcd-operator-69b85846b6-h7sbx\" (UID: \"5543e4e6-3fcd-4469-961d-5e3ed283f0dd\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-h7sbx" Dec 08 17:42:34 crc kubenswrapper[5112]: 
I1208 17:42:34.877391 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq69n\" (UniqueName: \"kubernetes.io/projected/98f49f4b-546f-43bb-bfa3-c6966837ab7c-kube-api-access-tq69n\") pod \"cluster-image-registry-operator-86c45576b9-qkblt\" (UID: \"98f49f4b-546f-43bb-bfa3-c6966837ab7c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qkblt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.877461 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e175a7a0-9b51-4b5d-b85a-dd604a3db837-trusted-ca-bundle\") pod \"console-64d44f6ddf-m2rqt\" (UID: \"e175a7a0-9b51-4b5d-b85a-dd604a3db837\") " pod="openshift-console/console-64d44f6ddf-m2rqt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.877510 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/210c8180-5efd-403d-bc10-32004b40c0dc-audit-dir\") pod \"apiserver-8596bd845d-8k2zp\" (UID: \"210c8180-5efd-403d-bc10-32004b40c0dc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8k2zp" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.877543 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e0c9c4f-1216-499b-a1dd-be2f225cb97f-client-ca\") pod \"controller-manager-65b6cccf98-p8dgq\" (UID: \"7e0c9c4f-1216-499b-a1dd-be2f225cb97f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-p8dgq" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.877569 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d032b7b0-4a86-448c-b592-dd1633f1152e-config\") pod \"openshift-apiserver-operator-846cbfc458-f7bpv\" (UID: \"d032b7b0-4a86-448c-b592-dd1633f1152e\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-f7bpv" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.877595 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-74dth\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.877626 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e175a7a0-9b51-4b5d-b85a-dd604a3db837-console-config\") pod \"console-64d44f6ddf-m2rqt\" (UID: \"e175a7a0-9b51-4b5d-b85a-dd604a3db837\") " pod="openshift-console/console-64d44f6ddf-m2rqt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.878437 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/19fcd464-915f-4883-8da8-c4dffba0bbbd-audit\") pod \"apiserver-9ddfb9f55-n6jr7\" (UID: \"19fcd464-915f-4883-8da8-c4dffba0bbbd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-n6jr7" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.878810 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/280972fd-54d5-4bd4-824f-6e5d16f77f21-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-sshdm\" (UID: \"280972fd-54d5-4bd4-824f-6e5d16f77f21\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sshdm" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.878821 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/280972fd-54d5-4bd4-824f-6e5d16f77f21-config\") pod \"authentication-operator-7f5c659b84-sshdm\" (UID: 
\"280972fd-54d5-4bd4-824f-6e5d16f77f21\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sshdm" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.878946 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19fcd464-915f-4883-8da8-c4dffba0bbbd-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-n6jr7\" (UID: \"19fcd464-915f-4883-8da8-c4dffba0bbbd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-n6jr7" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.879308 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19fcd464-915f-4883-8da8-c4dffba0bbbd-config\") pod \"apiserver-9ddfb9f55-n6jr7\" (UID: \"19fcd464-915f-4883-8da8-c4dffba0bbbd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-n6jr7" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.879492 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/210c8180-5efd-403d-bc10-32004b40c0dc-etcd-serving-ca\") pod \"apiserver-8596bd845d-8k2zp\" (UID: \"210c8180-5efd-403d-bc10-32004b40c0dc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8k2zp" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.879676 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/19fcd464-915f-4883-8da8-c4dffba0bbbd-etcd-client\") pod \"apiserver-9ddfb9f55-n6jr7\" (UID: \"19fcd464-915f-4883-8da8-c4dffba0bbbd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-n6jr7" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.879939 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6410ce59-323e-498a-b4f6-fe662a4c2d9b-images\") pod \"machine-api-operator-755bb95488-9rvxw\" (UID: \"6410ce59-323e-498a-b4f6-fe662a4c2d9b\") " 
pod="openshift-machine-api/machine-api-operator-755bb95488-9rvxw" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.881633 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b42st\" (UniqueName: \"kubernetes.io/projected/280972fd-54d5-4bd4-824f-6e5d16f77f21-kube-api-access-b42st\") pod \"authentication-operator-7f5c659b84-sshdm\" (UID: \"280972fd-54d5-4bd4-824f-6e5d16f77f21\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sshdm" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.881796 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-74dth\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.881874 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-74dth\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.880397 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e0c9c4f-1216-499b-a1dd-be2f225cb97f-config\") pod \"controller-manager-65b6cccf98-p8dgq\" (UID: \"7e0c9c4f-1216-499b-a1dd-be2f225cb97f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-p8dgq" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.880831 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/280972fd-54d5-4bd4-824f-6e5d16f77f21-serving-cert\") pod \"authentication-operator-7f5c659b84-sshdm\" (UID: \"280972fd-54d5-4bd4-824f-6e5d16f77f21\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sshdm" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.880988 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d032b7b0-4a86-448c-b592-dd1633f1152e-config\") pod \"openshift-apiserver-operator-846cbfc458-f7bpv\" (UID: \"d032b7b0-4a86-448c-b592-dd1633f1152e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-f7bpv" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.881135 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e0c9c4f-1216-499b-a1dd-be2f225cb97f-client-ca\") pod \"controller-manager-65b6cccf98-p8dgq\" (UID: \"7e0c9c4f-1216-499b-a1dd-be2f225cb97f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-p8dgq" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.881175 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/210c8180-5efd-403d-bc10-32004b40c0dc-audit-dir\") pod \"apiserver-8596bd845d-8k2zp\" (UID: \"210c8180-5efd-403d-bc10-32004b40c0dc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8k2zp" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.881540 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-vpxb8"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.882050 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e175a7a0-9b51-4b5d-b85a-dd604a3db837-oauth-serving-cert\") pod \"console-64d44f6ddf-m2rqt\" (UID: \"e175a7a0-9b51-4b5d-b85a-dd604a3db837\") " 
pod="openshift-console/console-64d44f6ddf-m2rqt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.882201 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e175a7a0-9b51-4b5d-b85a-dd604a3db837-trusted-ca-bundle\") pod \"console-64d44f6ddf-m2rqt\" (UID: \"e175a7a0-9b51-4b5d-b85a-dd604a3db837\") " pod="openshift-console/console-64d44f6ddf-m2rqt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.882267 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-74dth\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.882389 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/98f49f4b-546f-43bb-bfa3-c6966837ab7c-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-qkblt\" (UID: \"98f49f4b-546f-43bb-bfa3-c6966837ab7c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qkblt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.882492 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6qwr\" (UniqueName: \"kubernetes.io/projected/bb7829a6-bbd3-49f8-8dc2-8a605fe4b138-kube-api-access-t6qwr\") pod \"openshift-controller-manager-operator-686468bdd5-82k2c\" (UID: \"bb7829a6-bbd3-49f8-8dc2-8a605fe4b138\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-82k2c" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.882671 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-74dth\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.882765 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/754d2239-a1f1-4950-af6d-5f18fcc9b2db-config\") pod \"console-operator-67c89758df-7q49w\" (UID: \"754d2239-a1f1-4950-af6d-5f18fcc9b2db\") " pod="openshift-console-operator/console-operator-67c89758df-7q49w" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.881674 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/754d2239-a1f1-4950-af6d-5f18fcc9b2db-trusted-ca\") pod \"console-operator-67c89758df-7q49w\" (UID: \"754d2239-a1f1-4950-af6d-5f18fcc9b2db\") " pod="openshift-console-operator/console-operator-67c89758df-7q49w" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.883378 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e0c9c4f-1216-499b-a1dd-be2f225cb97f-serving-cert\") pod \"controller-manager-65b6cccf98-p8dgq\" (UID: \"7e0c9c4f-1216-499b-a1dd-be2f225cb97f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-p8dgq" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.883384 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6410ce59-323e-498a-b4f6-fe662a4c2d9b-images\") pod \"machine-api-operator-755bb95488-9rvxw\" (UID: \"6410ce59-323e-498a-b4f6-fe662a4c2d9b\") " pod="openshift-machine-api/machine-api-operator-755bb95488-9rvxw" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.883375 5112 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-74dth\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.883499 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-hc5xj" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.883554 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/82cb6e24-6805-46a7-8f49-7d48eb8684fe-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-rc5qq\" (UID: \"82cb6e24-6805-46a7-8f49-7d48eb8684fe\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rc5qq" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.883682 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210c8180-5efd-403d-bc10-32004b40c0dc-serving-cert\") pod \"apiserver-8596bd845d-8k2zp\" (UID: \"210c8180-5efd-403d-bc10-32004b40c0dc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8k2zp" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.883884 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e175a7a0-9b51-4b5d-b85a-dd604a3db837-console-oauth-config\") pod \"console-64d44f6ddf-m2rqt\" (UID: \"e175a7a0-9b51-4b5d-b85a-dd604a3db837\") " pod="openshift-console/console-64d44f6ddf-m2rqt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.884108 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/af065ece-a0e6-49a0-ba5e-21875f49cbd2-serving-cert\") pod \"route-controller-manager-776cdc94d6-k5crt\" (UID: \"af065ece-a0e6-49a0-ba5e-21875f49cbd2\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k5crt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.884315 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.884440 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/af065ece-a0e6-49a0-ba5e-21875f49cbd2-tmp\") pod \"route-controller-manager-776cdc94d6-k5crt\" (UID: \"af065ece-a0e6-49a0-ba5e-21875f49cbd2\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k5crt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.884441 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d032b7b0-4a86-448c-b592-dd1633f1152e-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-f7bpv\" (UID: \"d032b7b0-4a86-448c-b592-dd1633f1152e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-f7bpv" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.880223 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/210c8180-5efd-403d-bc10-32004b40c0dc-audit-policies\") pod \"apiserver-8596bd845d-8k2zp\" (UID: \"210c8180-5efd-403d-bc10-32004b40c0dc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8k2zp" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.880132 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af065ece-a0e6-49a0-ba5e-21875f49cbd2-config\") pod \"route-controller-manager-776cdc94d6-k5crt\" (UID: 
\"af065ece-a0e6-49a0-ba5e-21875f49cbd2\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k5crt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.880360 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e0c9c4f-1216-499b-a1dd-be2f225cb97f-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-p8dgq\" (UID: \"7e0c9c4f-1216-499b-a1dd-be2f225cb97f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-p8dgq" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.885167 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/210c8180-5efd-403d-bc10-32004b40c0dc-etcd-client\") pod \"apiserver-8596bd845d-8k2zp\" (UID: \"210c8180-5efd-403d-bc10-32004b40c0dc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8k2zp" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.885592 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e175a7a0-9b51-4b5d-b85a-dd604a3db837-console-config\") pod \"console-64d44f6ddf-m2rqt\" (UID: \"e175a7a0-9b51-4b5d-b85a-dd604a3db837\") " pod="openshift-console/console-64d44f6ddf-m2rqt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.885593 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/de7615f0-5173-4b64-8f4d-ba4da37884b6-audit-policies\") pod \"oauth-openshift-66458b6674-74dth\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.885832 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e175a7a0-9b51-4b5d-b85a-dd604a3db837-console-serving-cert\") pod 
\"console-64d44f6ddf-m2rqt\" (UID: \"e175a7a0-9b51-4b5d-b85a-dd604a3db837\") " pod="openshift-console/console-64d44f6ddf-m2rqt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.886122 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e175a7a0-9b51-4b5d-b85a-dd604a3db837-service-ca\") pod \"console-64d44f6ddf-m2rqt\" (UID: \"e175a7a0-9b51-4b5d-b85a-dd604a3db837\") " pod="openshift-console/console-64d44f6ddf-m2rqt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.886351 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/19fcd464-915f-4883-8da8-c4dffba0bbbd-encryption-config\") pod \"apiserver-9ddfb9f55-n6jr7\" (UID: \"19fcd464-915f-4883-8da8-c4dffba0bbbd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-n6jr7" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.886249 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-74dth\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.886810 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6410ce59-323e-498a-b4f6-fe662a4c2d9b-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-9rvxw\" (UID: \"6410ce59-323e-498a-b4f6-fe662a4c2d9b\") " pod="openshift-machine-api/machine-api-operator-755bb95488-9rvxw" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.887463 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-74dth\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.887663 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/19fcd464-915f-4883-8da8-c4dffba0bbbd-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-n6jr7\" (UID: \"19fcd464-915f-4883-8da8-c4dffba0bbbd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-n6jr7" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.887872 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-74dth\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.888205 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-74dth\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.888332 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/210c8180-5efd-403d-bc10-32004b40c0dc-encryption-config\") pod \"apiserver-8596bd845d-8k2zp\" (UID: \"210c8180-5efd-403d-bc10-32004b40c0dc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8k2zp" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.888399 5112 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/210c8180-5efd-403d-bc10-32004b40c0dc-trusted-ca-bundle\") pod \"apiserver-8596bd845d-8k2zp\" (UID: \"210c8180-5efd-403d-bc10-32004b40c0dc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8k2zp" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.888643 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/754d2239-a1f1-4950-af6d-5f18fcc9b2db-serving-cert\") pod \"console-operator-67c89758df-7q49w\" (UID: \"754d2239-a1f1-4950-af6d-5f18fcc9b2db\") " pod="openshift-console-operator/console-operator-67c89758df-7q49w" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.888883 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-74dth\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.889539 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-74dth\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.890305 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19fcd464-915f-4883-8da8-c4dffba0bbbd-serving-cert\") pod \"apiserver-9ddfb9f55-n6jr7\" (UID: \"19fcd464-915f-4883-8da8-c4dffba0bbbd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-n6jr7" Dec 08 17:42:34 crc 
kubenswrapper[5112]: I1208 17:42:34.890432 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-w7gg4"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.890568 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fpwm7" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.890705 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.896949 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-74dth\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.902476 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.922050 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.923011 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.923041 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420250-8gf2b"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.923141 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-w7gg4" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.926727 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-kc9qw"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.927405 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-8gf2b" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.932558 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-nrr58"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.932700 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-kc9qw" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.934887 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-j76sm"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.935060 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-nrr58" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.936977 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2kppn"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.937169 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-j76sm" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.941130 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-kmdd7"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.941230 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2kppn" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.941440 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.943548 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-v5t7z"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.943605 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-kmdd7" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.946485 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-4lrgt"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.946569 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-v5t7z" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.950752 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-p9hpg"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.952059 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-4lrgt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.960949 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-mpfh5"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.961037 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-p9hpg" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.961111 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.965262 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-ws944"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.965323 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-mpfh5" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.967762 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ms6jw"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.967885 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-ws944" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.970307 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-xvpqj"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.970397 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ms6jw" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.972937 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-qpqxs"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.973120 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-xvpqj" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.975570 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-vzm4l"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.975702 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-qpqxs" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.978818 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-xwx7k"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.978940 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-vzm4l" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.981141 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.982119 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-fmwqx"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.982723 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-xwx7k" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.983764 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec53c21b-b648-4496-882b-64dbb3f54c68-config\") pod \"kube-apiserver-operator-575994946d-w7gg4\" (UID: \"ec53c21b-b648-4496-882b-64dbb3f54c68\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-w7gg4" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.983839 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/98f49f4b-546f-43bb-bfa3-c6966837ab7c-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-qkblt\" (UID: \"98f49f4b-546f-43bb-bfa3-c6966837ab7c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qkblt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.983878 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bb7829a6-bbd3-49f8-8dc2-8a605fe4b138-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-82k2c\" (UID: \"bb7829a6-bbd3-49f8-8dc2-8a605fe4b138\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-82k2c" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.983915 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb7829a6-bbd3-49f8-8dc2-8a605fe4b138-config\") pod \"openshift-controller-manager-operator-686468bdd5-82k2c\" (UID: \"bb7829a6-bbd3-49f8-8dc2-8a605fe4b138\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-82k2c" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.983952 5112 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec53c21b-b648-4496-882b-64dbb3f54c68-kube-api-access\") pod \"kube-apiserver-operator-575994946d-w7gg4\" (UID: \"ec53c21b-b648-4496-882b-64dbb3f54c68\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-w7gg4" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.983992 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5543e4e6-3fcd-4469-961d-5e3ed283f0dd-etcd-service-ca\") pod \"etcd-operator-69b85846b6-h7sbx\" (UID: \"5543e4e6-3fcd-4469-961d-5e3ed283f0dd\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-h7sbx" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.984042 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tq69n\" (UniqueName: \"kubernetes.io/projected/98f49f4b-546f-43bb-bfa3-c6966837ab7c-kube-api-access-tq69n\") pod \"cluster-image-registry-operator-86c45576b9-qkblt\" (UID: \"98f49f4b-546f-43bb-bfa3-c6966837ab7c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qkblt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.984155 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15777a2f-256b-4501-9856-749819a161a9-secret-volume\") pod \"collect-profiles-29420250-8gf2b\" (UID: \"15777a2f-256b-4501-9856-749819a161a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-8gf2b" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.984252 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7adf44ec-4226-407e-85c7-bd8a5d9bbf0d-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-kc9qw\" (UID: 
\"7adf44ec-4226-407e-85c7-bd8a5d9bbf0d\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-kc9qw" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.984300 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hsn2\" (UniqueName: \"kubernetes.io/projected/7adf44ec-4226-407e-85c7-bd8a5d9bbf0d-kube-api-access-9hsn2\") pod \"ingress-operator-6b9cb4dbcf-kc9qw\" (UID: \"7adf44ec-4226-407e-85c7-bd8a5d9bbf0d\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-kc9qw" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.984351 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/98f49f4b-546f-43bb-bfa3-c6966837ab7c-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-qkblt\" (UID: \"98f49f4b-546f-43bb-bfa3-c6966837ab7c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qkblt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.984380 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t6qwr\" (UniqueName: \"kubernetes.io/projected/bb7829a6-bbd3-49f8-8dc2-8a605fe4b138-kube-api-access-t6qwr\") pod \"openshift-controller-manager-operator-686468bdd5-82k2c\" (UID: \"bb7829a6-bbd3-49f8-8dc2-8a605fe4b138\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-82k2c" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.984405 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15777a2f-256b-4501-9856-749819a161a9-config-volume\") pod \"collect-profiles-29420250-8gf2b\" (UID: \"15777a2f-256b-4501-9856-749819a161a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-8gf2b" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.984454 5112 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5543e4e6-3fcd-4469-961d-5e3ed283f0dd-serving-cert\") pod \"etcd-operator-69b85846b6-h7sbx\" (UID: \"5543e4e6-3fcd-4469-961d-5e3ed283f0dd\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-h7sbx" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.984477 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5543e4e6-3fcd-4469-961d-5e3ed283f0dd-config\") pod \"etcd-operator-69b85846b6-h7sbx\" (UID: \"5543e4e6-3fcd-4469-961d-5e3ed283f0dd\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-h7sbx" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.984506 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb7829a6-bbd3-49f8-8dc2-8a605fe4b138-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-82k2c\" (UID: \"bb7829a6-bbd3-49f8-8dc2-8a605fe4b138\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-82k2c" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.984528 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/34f5e653-2a78-42fa-ae6e-776dcc6fb3a7-tmpfs\") pod \"olm-operator-5cdf44d969-nrr58\" (UID: \"34f5e653-2a78-42fa-ae6e-776dcc6fb3a7\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-nrr58" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.984549 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3fcce943-40d4-4ee8-aabb-7754a1bde5bc-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-fpwm7\" (UID: \"3fcce943-40d4-4ee8-aabb-7754a1bde5bc\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fpwm7" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.984609 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec53c21b-b648-4496-882b-64dbb3f54c68-serving-cert\") pod \"kube-apiserver-operator-575994946d-w7gg4\" (UID: \"ec53c21b-b648-4496-882b-64dbb3f54c68\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-w7gg4" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.984653 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5543e4e6-3fcd-4469-961d-5e3ed283f0dd-tmp-dir\") pod \"etcd-operator-69b85846b6-h7sbx\" (UID: \"5543e4e6-3fcd-4469-961d-5e3ed283f0dd\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-h7sbx" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.984674 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7rdv\" (UniqueName: \"kubernetes.io/projected/34f5e653-2a78-42fa-ae6e-776dcc6fb3a7-kube-api-access-x7rdv\") pod \"olm-operator-5cdf44d969-nrr58\" (UID: \"34f5e653-2a78-42fa-ae6e-776dcc6fb3a7\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-nrr58" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.984703 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7adf44ec-4226-407e-85c7-bd8a5d9bbf0d-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-kc9qw\" (UID: \"7adf44ec-4226-407e-85c7-bd8a5d9bbf0d\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-kc9qw" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.984732 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b0063af-3ff2-4e04-81f7-56971d792d20-serving-cert\") pod \"openshift-config-operator-5777786469-hc5xj\" (UID: \"9b0063af-3ff2-4e04-81f7-56971d792d20\") " pod="openshift-config-operator/openshift-config-operator-5777786469-hc5xj" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.984743 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5543e4e6-3fcd-4469-961d-5e3ed283f0dd-etcd-service-ca\") pod \"etcd-operator-69b85846b6-h7sbx\" (UID: \"5543e4e6-3fcd-4469-961d-5e3ed283f0dd\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-h7sbx" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.984743 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb7829a6-bbd3-49f8-8dc2-8a605fe4b138-config\") pod \"openshift-controller-manager-operator-686468bdd5-82k2c\" (UID: \"bb7829a6-bbd3-49f8-8dc2-8a605fe4b138\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-82k2c" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.984760 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/500cfd87-2e0f-4321-a7d5-f19d851aafc9-tmp-dir\") pod \"dns-operator-799b87ffcd-cxzx8\" (UID: \"500cfd87-2e0f-4321-a7d5-f19d851aafc9\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-cxzx8" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.984788 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tk4mk\" (UniqueName: \"kubernetes.io/projected/500cfd87-2e0f-4321-a7d5-f19d851aafc9-kube-api-access-tk4mk\") pod \"dns-operator-799b87ffcd-cxzx8\" (UID: \"500cfd87-2e0f-4321-a7d5-f19d851aafc9\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-cxzx8" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.984809 
5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fcce943-40d4-4ee8-aabb-7754a1bde5bc-config\") pod \"kube-controller-manager-operator-69d5f845f8-fpwm7\" (UID: \"3fcce943-40d4-4ee8-aabb-7754a1bde5bc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fpwm7" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.984840 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/98f49f4b-546f-43bb-bfa3-c6966837ab7c-tmp\") pod \"cluster-image-registry-operator-86c45576b9-qkblt\" (UID: \"98f49f4b-546f-43bb-bfa3-c6966837ab7c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qkblt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.984858 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5543e4e6-3fcd-4469-961d-5e3ed283f0dd-etcd-client\") pod \"etcd-operator-69b85846b6-h7sbx\" (UID: \"5543e4e6-3fcd-4469-961d-5e3ed283f0dd\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-h7sbx" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.984873 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/34f5e653-2a78-42fa-ae6e-776dcc6fb3a7-srv-cert\") pod \"olm-operator-5cdf44d969-nrr58\" (UID: \"34f5e653-2a78-42fa-ae6e-776dcc6fb3a7\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-nrr58" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.984899 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3fcce943-40d4-4ee8-aabb-7754a1bde5bc-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-fpwm7\" (UID: 
\"3fcce943-40d4-4ee8-aabb-7754a1bde5bc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fpwm7" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.984936 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54k67\" (UniqueName: \"kubernetes.io/projected/15777a2f-256b-4501-9856-749819a161a9-kube-api-access-54k67\") pod \"collect-profiles-29420250-8gf2b\" (UID: \"15777a2f-256b-4501-9856-749819a161a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-8gf2b" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.985258 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/500cfd87-2e0f-4321-a7d5-f19d851aafc9-tmp-dir\") pod \"dns-operator-799b87ffcd-cxzx8\" (UID: \"500cfd87-2e0f-4321-a7d5-f19d851aafc9\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-cxzx8" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.985258 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5543e4e6-3fcd-4469-961d-5e3ed283f0dd-tmp-dir\") pod \"etcd-operator-69b85846b6-h7sbx\" (UID: \"5543e4e6-3fcd-4469-961d-5e3ed283f0dd\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-h7sbx" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.985363 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5543e4e6-3fcd-4469-961d-5e3ed283f0dd-config\") pod \"etcd-operator-69b85846b6-h7sbx\" (UID: \"5543e4e6-3fcd-4469-961d-5e3ed283f0dd\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-h7sbx" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.985463 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/500cfd87-2e0f-4321-a7d5-f19d851aafc9-metrics-tls\") pod 
\"dns-operator-799b87ffcd-cxzx8\" (UID: \"500cfd87-2e0f-4321-a7d5-f19d851aafc9\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-cxzx8" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.985515 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/98f49f4b-546f-43bb-bfa3-c6966837ab7c-tmp\") pod \"cluster-image-registry-operator-86c45576b9-qkblt\" (UID: \"98f49f4b-546f-43bb-bfa3-c6966837ab7c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qkblt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.985528 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7adf44ec-4226-407e-85c7-bd8a5d9bbf0d-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-kc9qw\" (UID: \"7adf44ec-4226-407e-85c7-bd8a5d9bbf0d\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-kc9qw" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.985611 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5543e4e6-3fcd-4469-961d-5e3ed283f0dd-etcd-ca\") pod \"etcd-operator-69b85846b6-h7sbx\" (UID: \"5543e4e6-3fcd-4469-961d-5e3ed283f0dd\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-h7sbx" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.985639 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zvcvt\" (UniqueName: \"kubernetes.io/projected/5543e4e6-3fcd-4469-961d-5e3ed283f0dd-kube-api-access-zvcvt\") pod \"etcd-operator-69b85846b6-h7sbx\" (UID: \"5543e4e6-3fcd-4469-961d-5e3ed283f0dd\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-h7sbx" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.985671 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: 
\"kubernetes.io/empty-dir/98f49f4b-546f-43bb-bfa3-c6966837ab7c-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-qkblt\" (UID: \"98f49f4b-546f-43bb-bfa3-c6966837ab7c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qkblt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.985690 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3fcce943-40d4-4ee8-aabb-7754a1bde5bc-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-fpwm7\" (UID: \"3fcce943-40d4-4ee8-aabb-7754a1bde5bc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fpwm7" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.985714 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ec53c21b-b648-4496-882b-64dbb3f54c68-tmp-dir\") pod \"kube-apiserver-operator-575994946d-w7gg4\" (UID: \"ec53c21b-b648-4496-882b-64dbb3f54c68\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-w7gg4" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.985732 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9b0063af-3ff2-4e04-81f7-56971d792d20-available-featuregates\") pod \"openshift-config-operator-5777786469-hc5xj\" (UID: \"9b0063af-3ff2-4e04-81f7-56971d792d20\") " pod="openshift-config-operator/openshift-config-operator-5777786469-hc5xj" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.985782 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfhf6\" (UniqueName: \"kubernetes.io/projected/9b0063af-3ff2-4e04-81f7-56971d792d20-kube-api-access-xfhf6\") pod 
\"openshift-config-operator-5777786469-hc5xj\" (UID: \"9b0063af-3ff2-4e04-81f7-56971d792d20\") " pod="openshift-config-operator/openshift-config-operator-5777786469-hc5xj" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.985875 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/98f49f4b-546f-43bb-bfa3-c6966837ab7c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-qkblt\" (UID: \"98f49f4b-546f-43bb-bfa3-c6966837ab7c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qkblt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.985915 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/34f5e653-2a78-42fa-ae6e-776dcc6fb3a7-profile-collector-cert\") pod \"olm-operator-5cdf44d969-nrr58\" (UID: \"34f5e653-2a78-42fa-ae6e-776dcc6fb3a7\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-nrr58" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.986007 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/98f49f4b-546f-43bb-bfa3-c6966837ab7c-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-qkblt\" (UID: \"98f49f4b-546f-43bb-bfa3-c6966837ab7c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qkblt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.986119 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5543e4e6-3fcd-4469-961d-5e3ed283f0dd-etcd-ca\") pod \"etcd-operator-69b85846b6-h7sbx\" (UID: \"5543e4e6-3fcd-4469-961d-5e3ed283f0dd\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-h7sbx" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.986519 5112 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/98f49f4b-546f-43bb-bfa3-c6966837ab7c-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-qkblt\" (UID: \"98f49f4b-546f-43bb-bfa3-c6966837ab7c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qkblt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.986703 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bb7829a6-bbd3-49f8-8dc2-8a605fe4b138-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-82k2c\" (UID: \"bb7829a6-bbd3-49f8-8dc2-8a605fe4b138\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-82k2c" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.987808 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-p8dgq"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.987843 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-9rvxw"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.987865 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-zcf9f"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.987940 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5543e4e6-3fcd-4469-961d-5e3ed283f0dd-serving-cert\") pod \"etcd-operator-69b85846b6-h7sbx\" (UID: \"5543e4e6-3fcd-4469-961d-5e3ed283f0dd\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-h7sbx" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.988037 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-fmwqx" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.989055 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb7829a6-bbd3-49f8-8dc2-8a605fe4b138-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-82k2c\" (UID: \"bb7829a6-bbd3-49f8-8dc2-8a605fe4b138\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-82k2c" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.989603 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5543e4e6-3fcd-4469-961d-5e3ed283f0dd-etcd-client\") pod \"etcd-operator-69b85846b6-h7sbx\" (UID: \"5543e4e6-3fcd-4469-961d-5e3ed283f0dd\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-h7sbx" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.990581 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/98f49f4b-546f-43bb-bfa3-c6966837ab7c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-qkblt\" (UID: \"98f49f4b-546f-43bb-bfa3-c6966837ab7c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qkblt" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.992318 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/500cfd87-2e0f-4321-a7d5-f19d851aafc9-metrics-tls\") pod \"dns-operator-799b87ffcd-cxzx8\" (UID: \"500cfd87-2e0f-4321-a7d5-f19d851aafc9\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-cxzx8" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.995419 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-n6jr7"] Dec 08 17:42:34 crc 
kubenswrapper[5112]: I1208 17:42:34.995450 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-7q49w"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.995463 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-f7bpv"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.995476 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-cxzx8"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.995583 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-gv282"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.995594 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-zcf9f" Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.995602 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-k5crt"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.995719 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-hc5xj"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.995734 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-sshdm"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.995749 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-82k2c"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.995764 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-m2rqt"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.995778 5112 kubelet.go:2544] 
"SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-74dth"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.995792 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-kc9qw"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.995806 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420250-8gf2b"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.995832 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-qkblt"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.995851 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-v5t7z"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.995866 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-mpfh5"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.995879 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fpwm7"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.995896 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-w7gg4"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.995909 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-h7sbx"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.995923 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-8k2zp"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.995937 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-qpqxs"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.995948 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-4lrgt"] Dec 08 17:42:34 crc kubenswrapper[5112]: I1208 17:42:34.995961 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-p4h9p"] Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.000272 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-kmdd7"] Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.000301 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ms6jw"] Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.000315 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-nrr58"] Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.000329 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-z5hzx"] Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.000358 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-p4h9p" Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.004165 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-fttjc"] Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.004385 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-z5hzx" Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.007573 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2kppn"] Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.007609 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-vpxb8"] Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.007626 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-xvpqj"] Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.007639 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-vzm4l"] Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.007651 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-ws944"] Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.007665 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-z5hzx"] Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.007676 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-j76sm"] Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.007688 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rc5qq"] Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.007700 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-fmwqx"] Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.007755 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-fttjc" Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.007797 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-p4h9p"] Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.007820 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-xwx7k"] Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.007845 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-hjh5k"] Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.010798 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-6gxxt"] Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.010934 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-hjh5k" Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.012885 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-hjh5k"] Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.013007 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-6gxxt" Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.037908 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ljt8\" (UniqueName: \"kubernetes.io/projected/754d2239-a1f1-4950-af6d-5f18fcc9b2db-kube-api-access-6ljt8\") pod \"console-operator-67c89758df-7q49w\" (UID: \"754d2239-a1f1-4950-af6d-5f18fcc9b2db\") " pod="openshift-console-operator/console-operator-67c89758df-7q49w" Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.066991 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vl89\" (UniqueName: \"kubernetes.io/projected/82cb6e24-6805-46a7-8f49-7d48eb8684fe-kube-api-access-7vl89\") pod \"cluster-samples-operator-6b564684c8-rc5qq\" (UID: \"82cb6e24-6805-46a7-8f49-7d48eb8684fe\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rc5qq" Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.078389 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4brt\" (UniqueName: \"kubernetes.io/projected/19fcd464-915f-4883-8da8-c4dffba0bbbd-kube-api-access-j4brt\") pod \"apiserver-9ddfb9f55-n6jr7\" (UID: \"19fcd464-915f-4883-8da8-c4dffba0bbbd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-n6jr7" Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.087065 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/34f5e653-2a78-42fa-ae6e-776dcc6fb3a7-tmpfs\") pod \"olm-operator-5cdf44d969-nrr58\" (UID: \"34f5e653-2a78-42fa-ae6e-776dcc6fb3a7\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-nrr58" Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.087115 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3fcce943-40d4-4ee8-aabb-7754a1bde5bc-tmp-dir\") pod 
\"kube-controller-manager-operator-69d5f845f8-fpwm7\" (UID: \"3fcce943-40d4-4ee8-aabb-7754a1bde5bc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fpwm7" Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.087133 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec53c21b-b648-4496-882b-64dbb3f54c68-serving-cert\") pod \"kube-apiserver-operator-575994946d-w7gg4\" (UID: \"ec53c21b-b648-4496-882b-64dbb3f54c68\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-w7gg4" Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.087157 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x7rdv\" (UniqueName: \"kubernetes.io/projected/34f5e653-2a78-42fa-ae6e-776dcc6fb3a7-kube-api-access-x7rdv\") pod \"olm-operator-5cdf44d969-nrr58\" (UID: \"34f5e653-2a78-42fa-ae6e-776dcc6fb3a7\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-nrr58" Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.087177 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7adf44ec-4226-407e-85c7-bd8a5d9bbf0d-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-kc9qw\" (UID: \"7adf44ec-4226-407e-85c7-bd8a5d9bbf0d\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-kc9qw" Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.087409 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b0063af-3ff2-4e04-81f7-56971d792d20-serving-cert\") pod \"openshift-config-operator-5777786469-hc5xj\" (UID: \"9b0063af-3ff2-4e04-81f7-56971d792d20\") " pod="openshift-config-operator/openshift-config-operator-5777786469-hc5xj" Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.087434 5112 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fcce943-40d4-4ee8-aabb-7754a1bde5bc-config\") pod \"kube-controller-manager-operator-69d5f845f8-fpwm7\" (UID: \"3fcce943-40d4-4ee8-aabb-7754a1bde5bc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fpwm7" Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.087456 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/34f5e653-2a78-42fa-ae6e-776dcc6fb3a7-srv-cert\") pod \"olm-operator-5cdf44d969-nrr58\" (UID: \"34f5e653-2a78-42fa-ae6e-776dcc6fb3a7\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-nrr58" Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.087466 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/34f5e653-2a78-42fa-ae6e-776dcc6fb3a7-tmpfs\") pod \"olm-operator-5cdf44d969-nrr58\" (UID: \"34f5e653-2a78-42fa-ae6e-776dcc6fb3a7\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-nrr58" Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.087481 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3fcce943-40d4-4ee8-aabb-7754a1bde5bc-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-fpwm7\" (UID: \"3fcce943-40d4-4ee8-aabb-7754a1bde5bc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fpwm7" Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.087523 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-54k67\" (UniqueName: \"kubernetes.io/projected/15777a2f-256b-4501-9856-749819a161a9-kube-api-access-54k67\") pod \"collect-profiles-29420250-8gf2b\" (UID: \"15777a2f-256b-4501-9856-749819a161a9\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-8gf2b" Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.087559 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7adf44ec-4226-407e-85c7-bd8a5d9bbf0d-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-kc9qw\" (UID: \"7adf44ec-4226-407e-85c7-bd8a5d9bbf0d\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-kc9qw" Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.087639 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3fcce943-40d4-4ee8-aabb-7754a1bde5bc-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-fpwm7\" (UID: \"3fcce943-40d4-4ee8-aabb-7754a1bde5bc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fpwm7" Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.087663 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ec53c21b-b648-4496-882b-64dbb3f54c68-tmp-dir\") pod \"kube-apiserver-operator-575994946d-w7gg4\" (UID: \"ec53c21b-b648-4496-882b-64dbb3f54c68\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-w7gg4" Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.087679 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9b0063af-3ff2-4e04-81f7-56971d792d20-available-featuregates\") pod \"openshift-config-operator-5777786469-hc5xj\" (UID: \"9b0063af-3ff2-4e04-81f7-56971d792d20\") " pod="openshift-config-operator/openshift-config-operator-5777786469-hc5xj" Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.087696 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xfhf6\" 
(UniqueName: \"kubernetes.io/projected/9b0063af-3ff2-4e04-81f7-56971d792d20-kube-api-access-xfhf6\") pod \"openshift-config-operator-5777786469-hc5xj\" (UID: \"9b0063af-3ff2-4e04-81f7-56971d792d20\") " pod="openshift-config-operator/openshift-config-operator-5777786469-hc5xj" Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.087718 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/34f5e653-2a78-42fa-ae6e-776dcc6fb3a7-profile-collector-cert\") pod \"olm-operator-5cdf44d969-nrr58\" (UID: \"34f5e653-2a78-42fa-ae6e-776dcc6fb3a7\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-nrr58" Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.087753 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec53c21b-b648-4496-882b-64dbb3f54c68-config\") pod \"kube-apiserver-operator-575994946d-w7gg4\" (UID: \"ec53c21b-b648-4496-882b-64dbb3f54c68\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-w7gg4" Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.087762 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3fcce943-40d4-4ee8-aabb-7754a1bde5bc-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-fpwm7\" (UID: \"3fcce943-40d4-4ee8-aabb-7754a1bde5bc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fpwm7" Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.087782 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec53c21b-b648-4496-882b-64dbb3f54c68-kube-api-access\") pod \"kube-apiserver-operator-575994946d-w7gg4\" (UID: \"ec53c21b-b648-4496-882b-64dbb3f54c68\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-w7gg4" Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.087883 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15777a2f-256b-4501-9856-749819a161a9-secret-volume\") pod \"collect-profiles-29420250-8gf2b\" (UID: \"15777a2f-256b-4501-9856-749819a161a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-8gf2b" Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.087908 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7adf44ec-4226-407e-85c7-bd8a5d9bbf0d-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-kc9qw\" (UID: \"7adf44ec-4226-407e-85c7-bd8a5d9bbf0d\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-kc9qw" Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.087928 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9hsn2\" (UniqueName: \"kubernetes.io/projected/7adf44ec-4226-407e-85c7-bd8a5d9bbf0d-kube-api-access-9hsn2\") pod \"ingress-operator-6b9cb4dbcf-kc9qw\" (UID: \"7adf44ec-4226-407e-85c7-bd8a5d9bbf0d\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-kc9qw" Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.087964 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15777a2f-256b-4501-9856-749819a161a9-config-volume\") pod \"collect-profiles-29420250-8gf2b\" (UID: \"15777a2f-256b-4501-9856-749819a161a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-8gf2b" Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.088171 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/9b0063af-3ff2-4e04-81f7-56971d792d20-available-featuregates\") pod \"openshift-config-operator-5777786469-hc5xj\" (UID: \"9b0063af-3ff2-4e04-81f7-56971d792d20\") " pod="openshift-config-operator/openshift-config-operator-5777786469-hc5xj" Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.088362 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ec53c21b-b648-4496-882b-64dbb3f54c68-tmp-dir\") pod \"kube-apiserver-operator-575994946d-w7gg4\" (UID: \"ec53c21b-b648-4496-882b-64dbb3f54c68\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-w7gg4" Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.096105 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl9hl\" (UniqueName: \"kubernetes.io/projected/d032b7b0-4a86-448c-b592-dd1633f1152e-kube-api-access-wl9hl\") pod \"openshift-apiserver-operator-846cbfc458-f7bpv\" (UID: \"d032b7b0-4a86-448c-b592-dd1633f1152e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-f7bpv" Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.100469 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-7q49w" Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.115266 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rc5qq"
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.117160 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn55l\" (UniqueName: \"kubernetes.io/projected/210c8180-5efd-403d-bc10-32004b40c0dc-kube-api-access-mn55l\") pod \"apiserver-8596bd845d-8k2zp\" (UID: \"210c8180-5efd-403d-bc10-32004b40c0dc\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8k2zp"
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.139758 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwq8t\" (UniqueName: \"kubernetes.io/projected/af065ece-a0e6-49a0-ba5e-21875f49cbd2-kube-api-access-jwq8t\") pod \"route-controller-manager-776cdc94d6-k5crt\" (UID: \"af065ece-a0e6-49a0-ba5e-21875f49cbd2\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k5crt"
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.159362 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvznq\" (UniqueName: \"kubernetes.io/projected/6410ce59-323e-498a-b4f6-fe662a4c2d9b-kube-api-access-kvznq\") pod \"machine-api-operator-755bb95488-9rvxw\" (UID: \"6410ce59-323e-498a-b4f6-fe662a4c2d9b\") " pod="openshift-machine-api/machine-api-operator-755bb95488-9rvxw"
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.161324 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\""
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.172949 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b0063af-3ff2-4e04-81f7-56971d792d20-serving-cert\") pod \"openshift-config-operator-5777786469-hc5xj\" (UID: \"9b0063af-3ff2-4e04-81f7-56971d792d20\") " pod="openshift-config-operator/openshift-config-operator-5777786469-hc5xj"
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.181477 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\""
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.203035 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\""
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.224877 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\""
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.259337 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-865bw\" (UniqueName: \"kubernetes.io/projected/e175a7a0-9b51-4b5d-b85a-dd604a3db837-kube-api-access-865bw\") pod \"console-64d44f6ddf-m2rqt\" (UID: \"e175a7a0-9b51-4b5d-b85a-dd604a3db837\") " pod="openshift-console/console-64d44f6ddf-m2rqt"
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.278951 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhplh\" (UniqueName: \"kubernetes.io/projected/de7615f0-5173-4b64-8f4d-ba4da37884b6-kube-api-access-nhplh\") pod \"oauth-openshift-66458b6674-74dth\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " pod="openshift-authentication/oauth-openshift-66458b6674-74dth"
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.288681 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-9rvxw"
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.294984 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6qgc\" (UniqueName: \"kubernetes.io/projected/3b27b80a-df1a-4a29-82d6-384db5b6612e-kube-api-access-r6qgc\") pod \"downloads-747b44746d-gv282\" (UID: \"3b27b80a-df1a-4a29-82d6-384db5b6612e\") " pod="openshift-console/downloads-747b44746d-gv282"
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.316613 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rc5qq"]
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.319345 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2mq6\" (UniqueName: \"kubernetes.io/projected/7e0c9c4f-1216-499b-a1dd-be2f225cb97f-kube-api-access-l2mq6\") pod \"controller-manager-65b6cccf98-p8dgq\" (UID: \"7e0c9c4f-1216-499b-a1dd-be2f225cb97f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-p8dgq"
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.324891 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-7q49w"]
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.334561 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b42st\" (UniqueName: \"kubernetes.io/projected/280972fd-54d5-4bd4-824f-6e5d16f77f21-kube-api-access-b42st\") pod \"authentication-operator-7f5c659b84-sshdm\" (UID: \"280972fd-54d5-4bd4-824f-6e5d16f77f21\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sshdm"
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.337158 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-n6jr7"
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.342517 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\""
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.348208 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fcce943-40d4-4ee8-aabb-7754a1bde5bc-config\") pod \"kube-controller-manager-operator-69d5f845f8-fpwm7\" (UID: \"3fcce943-40d4-4ee8-aabb-7754a1bde5bc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fpwm7"
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.354257 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-8k2zp"
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.361198 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\""
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.368403 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k5crt"
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.378496 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-f7bpv"
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.381719 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\""
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.386415 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-m2rqt"
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.402723 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\""
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.411748 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3fcce943-40d4-4ee8-aabb-7754a1bde5bc-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-fpwm7\" (UID: \"3fcce943-40d4-4ee8-aabb-7754a1bde5bc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fpwm7"
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.421278 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\""
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.428503 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-gv282"
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.441443 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\""
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.443057 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-74dth"
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.457292 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-9rvxw"]
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.461180 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\""
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.481472 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\""
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.502053 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\""
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.528427 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\""
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.543239 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\""
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.547300 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec53c21b-b648-4496-882b-64dbb3f54c68-serving-cert\") pod \"kube-apiserver-operator-575994946d-w7gg4\" (UID: \"ec53c21b-b648-4496-882b-64dbb3f54c68\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-w7gg4"
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.551187 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec53c21b-b648-4496-882b-64dbb3f54c68-config\") pod \"kube-apiserver-operator-575994946d-w7gg4\" (UID: \"ec53c21b-b648-4496-882b-64dbb3f54c68\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-w7gg4"
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.563569 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\""
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.576367 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-p8dgq"
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.581898 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\""
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.589556 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15777a2f-256b-4501-9856-749819a161a9-config-volume\") pod \"collect-profiles-29420250-8gf2b\" (UID: \"15777a2f-256b-4501-9856-749819a161a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-8gf2b"
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.601074 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\""
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.617126 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15777a2f-256b-4501-9856-749819a161a9-secret-volume\") pod \"collect-profiles-29420250-8gf2b\" (UID: \"15777a2f-256b-4501-9856-749819a161a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-8gf2b"
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.618106 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/34f5e653-2a78-42fa-ae6e-776dcc6fb3a7-profile-collector-cert\") pod \"olm-operator-5cdf44d969-nrr58\" (UID: \"34f5e653-2a78-42fa-ae6e-776dcc6fb3a7\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-nrr58"
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.621681 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\""
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.625075 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sshdm"
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.643303 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\""
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.662002 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\""
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.677764 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-m2rqt"]
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.678514 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-f7bpv"]
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.681443 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\""
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.702540 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\""
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.713646 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7adf44ec-4226-407e-85c7-bd8a5d9bbf0d-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-kc9qw\" (UID: \"7adf44ec-4226-407e-85c7-bd8a5d9bbf0d\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-kc9qw"
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.719112 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-74dth"]
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.721494 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\""
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.729564 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-gv282"]
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.749864 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\""
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.760280 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7adf44ec-4226-407e-85c7-bd8a5d9bbf0d-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-kc9qw\" (UID: \"7adf44ec-4226-407e-85c7-bd8a5d9bbf0d\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-kc9qw"
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.761379 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\""
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.770533 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-p8dgq"]
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.780848 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\""
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.791330 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/34f5e653-2a78-42fa-ae6e-776dcc6fb3a7-srv-cert\") pod \"olm-operator-5cdf44d969-nrr58\" (UID: \"34f5e653-2a78-42fa-ae6e-776dcc6fb3a7\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-nrr58"
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.796418 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-n6jr7"]
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.810829 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-k5crt"]
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.811845 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-8k2zp"]
Dec 08 17:42:35 crc kubenswrapper[5112]: W1208 17:42:35.811941 5112 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19fcd464_915f_4883_8da8_c4dffba0bbbd.slice/crio-3709097f4f6ccbf85b95a480553979bfb0fee55bf5886ca3fd4e7c54bbdc3141 WatchSource:0}: Error finding container 3709097f4f6ccbf85b95a480553979bfb0fee55bf5886ca3fd4e7c54bbdc3141: Status 404 returned error can't find the container with id 3709097f4f6ccbf85b95a480553979bfb0fee55bf5886ca3fd4e7c54bbdc3141
Dec 08 17:42:35 crc kubenswrapper[5112]: W1208 17:42:35.818411 5112 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod210c8180_5efd_403d_bc10_32004b40c0dc.slice/crio-2c4c33d22eead7138ae8bd9ff716bd8b4dae8b2853df6bc0440c78836a370521 WatchSource:0}: Error finding container 2c4c33d22eead7138ae8bd9ff716bd8b4dae8b2853df6bc0440c78836a370521: Status 404 returned error can't find the container with id 2c4c33d22eead7138ae8bd9ff716bd8b4dae8b2853df6bc0440c78836a370521
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.821907 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\""
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.834822 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-sshdm"]
Dec 08 17:42:35 crc kubenswrapper[5112]: W1208 17:42:35.840112 5112 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod280972fd_54d5_4bd4_824f_6e5d16f77f21.slice/crio-3c81b13c92dae3c65500eae9c0b853a5c5f8bf660eed27354d5fc80bc698c749 WatchSource:0}: Error finding container 3c81b13c92dae3c65500eae9c0b853a5c5f8bf660eed27354d5fc80bc698c749: Status 404 returned error can't find the container with id 3c81b13c92dae3c65500eae9c0b853a5c5f8bf660eed27354d5fc80bc698c749
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.841850 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\""
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.863184 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\""
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.882195 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\""
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.901558 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\""
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.922216 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\""
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.942069 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\""
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.946964 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-74dth" event={"ID":"de7615f0-5173-4b64-8f4d-ba4da37884b6","Type":"ContainerStarted","Data":"44bcde4355845cae0a794e490fbed911fe5c3f32e16149ef3aa2a1a60f583ced"}
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.948511 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-gv282" event={"ID":"3b27b80a-df1a-4a29-82d6-384db5b6612e","Type":"ContainerStarted","Data":"d6c20ba25e66a2b3f525906a93976cf1412936d9f73257ec8a4bb6740d034b2e"}
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.949907 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sshdm" event={"ID":"280972fd-54d5-4bd4-824f-6e5d16f77f21","Type":"ContainerStarted","Data":"3c81b13c92dae3c65500eae9c0b853a5c5f8bf660eed27354d5fc80bc698c749"}
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.950936 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-8k2zp" event={"ID":"210c8180-5efd-403d-bc10-32004b40c0dc","Type":"ContainerStarted","Data":"2c4c33d22eead7138ae8bd9ff716bd8b4dae8b2853df6bc0440c78836a370521"}
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.951640 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-n6jr7" event={"ID":"19fcd464-915f-4883-8da8-c4dffba0bbbd","Type":"ContainerStarted","Data":"3709097f4f6ccbf85b95a480553979bfb0fee55bf5886ca3fd4e7c54bbdc3141"}
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.952713 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-9rvxw" event={"ID":"6410ce59-323e-498a-b4f6-fe662a4c2d9b","Type":"ContainerStarted","Data":"5587b4ed321958ec3e2eb95c7aafb4c714ca9abd826c67e034da1bc4b56e3d6f"}
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.952734 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-9rvxw" event={"ID":"6410ce59-323e-498a-b4f6-fe662a4c2d9b","Type":"ContainerStarted","Data":"7c0756a87e4e473d23f6fb1812ada418b746cd782aa319e8819b1f6e1bd4f93d"}
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.953409 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k5crt" event={"ID":"af065ece-a0e6-49a0-ba5e-21875f49cbd2","Type":"ContainerStarted","Data":"3bdf98e97399ca990caf022ca5b6064eafe9508bbf731ffed0191fffd9f51a21"}
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.954036 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-p8dgq" event={"ID":"7e0c9c4f-1216-499b-a1dd-be2f225cb97f","Type":"ContainerStarted","Data":"fe3b848f7fe53c06f5adbf2122fa10ce4d42c7769bd30cac5abc5d4c1d8e5b5d"}
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.954769 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-m2rqt" event={"ID":"e175a7a0-9b51-4b5d-b85a-dd604a3db837","Type":"ContainerStarted","Data":"5a1a06a8209782f565c34b54fa89984d3769f88ceccbfe784ea21a5019923aff"}
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.955373 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-f7bpv" event={"ID":"d032b7b0-4a86-448c-b592-dd1633f1152e","Type":"ContainerStarted","Data":"bcc7d22c72cb3262b54b06809d51b33a7e4ff652b49339f8c29ea5b3c64962dc"}
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.956439 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rc5qq" event={"ID":"82cb6e24-6805-46a7-8f49-7d48eb8684fe","Type":"ContainerStarted","Data":"cb93d4b476911b2b0282a97c85b17a4783b398e05269fdb126875944ea6630b1"}
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.956458 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rc5qq" event={"ID":"82cb6e24-6805-46a7-8f49-7d48eb8684fe","Type":"ContainerStarted","Data":"cc7872813ab625c4f94f9b6963e16553dc59120b098d253b22967a37095f45c6"}
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.956466 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rc5qq" event={"ID":"82cb6e24-6805-46a7-8f49-7d48eb8684fe","Type":"ContainerStarted","Data":"e4d439518dd81666719ff4242452182d14b126f74a4501faf56be73713aa3a5f"}
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.957128 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-7q49w" event={"ID":"754d2239-a1f1-4950-af6d-5f18fcc9b2db","Type":"ContainerStarted","Data":"8f1822610130f67806038d606e81508d4c461544d4775f8194e8febb3330fe47"}
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.957147 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-7q49w" event={"ID":"754d2239-a1f1-4950-af6d-5f18fcc9b2db","Type":"ContainerStarted","Data":"05719ee0059365bbfaf621e60418fcc771e923d64d7f9744924ee2a4583843a0"}
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.959557 5112 request.go:752] "Waited before sending request" delay="1.012807141s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-metrics&limit=500&resourceVersion=0"
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.961592 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\""
Dec 08 17:42:35 crc kubenswrapper[5112]: I1208 17:42:35.981995 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.001424 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.022843 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.035629 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-7q49w"
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.037501 5112 patch_prober.go:28] interesting pod/console-operator-67c89758df-7q49w container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.037599 5112 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-7q49w" podUID="754d2239-a1f1-4950-af6d-5f18fcc9b2db" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.050801 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.063339 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.082105 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.105324 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.123147 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.141918 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.161141 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.181718 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.201502 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.221764 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.241540 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.262124 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.282808 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.306597 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.321957 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.341688 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.362310 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.382917 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.402735 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.421888 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.441803 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.462696 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.482016 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.502612 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.522552 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.544099 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.561095 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.582156 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.602198 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.621301 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.640713 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.662293 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.683372 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.730475 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/98f49f4b-546f-43bb-bfa3-c6966837ab7c-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-qkblt\" (UID: \"98f49f4b-546f-43bb-bfa3-c6966837ab7c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qkblt"
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.744829 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tq69n\" (UniqueName: \"kubernetes.io/projected/98f49f4b-546f-43bb-bfa3-c6966837ab7c-kube-api-access-tq69n\") pod \"cluster-image-registry-operator-86c45576b9-qkblt\" (UID: \"98f49f4b-546f-43bb-bfa3-c6966837ab7c\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qkblt"
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.759852 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6qwr\" (UniqueName: \"kubernetes.io/projected/bb7829a6-bbd3-49f8-8dc2-8a605fe4b138-kube-api-access-t6qwr\") pod \"openshift-controller-manager-operator-686468bdd5-82k2c\" (UID: \"bb7829a6-bbd3-49f8-8dc2-8a605fe4b138\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-82k2c"
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.782642 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk4mk\" (UniqueName: \"kubernetes.io/projected/500cfd87-2e0f-4321-a7d5-f19d851aafc9-kube-api-access-tk4mk\") pod \"dns-operator-799b87ffcd-cxzx8\" (UID: \"500cfd87-2e0f-4321-a7d5-f19d851aafc9\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-cxzx8"
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.804204 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.804210 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvcvt\" (UniqueName: \"kubernetes.io/projected/5543e4e6-3fcd-4469-961d-5e3ed283f0dd-kube-api-access-zvcvt\") pod \"etcd-operator-69b85846b6-h7sbx\" (UID: \"5543e4e6-3fcd-4469-961d-5e3ed283f0dd\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-h7sbx"
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.822482 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.843646 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.861566 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.882016 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.902733 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.922072 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.942823 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.960179 5112 request.go:752] "Waited before sending request" delay="1.959616635s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0"
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.962072 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-82k2c"
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.963416 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\""
Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.969108 5112 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-cxzx8" Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.975208 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-p8dgq" event={"ID":"7e0c9c4f-1216-499b-a1dd-be2f225cb97f","Type":"ContainerStarted","Data":"3f8b95e90c456d5575829342acae5ef665f0c95e88f2e8e46d21e35baa84de6a"} Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.975575 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-p8dgq" Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.975654 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qkblt" Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.982825 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.982861 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-m2rqt" event={"ID":"e175a7a0-9b51-4b5d-b85a-dd604a3db837","Type":"ContainerStarted","Data":"2974e7d7be8e32735fb613714b8ecc542d9109b1812f25b9f88a6cf40cd1a5fa"} Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.985874 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-f7bpv" event={"ID":"d032b7b0-4a86-448c-b592-dd1633f1152e","Type":"ContainerStarted","Data":"ddaba2c07f0c235646fa9304d45fa324716f67dba6d0656be1e89b30d8aceceb"} Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.987686 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-74dth" 
event={"ID":"de7615f0-5173-4b64-8f4d-ba4da37884b6","Type":"ContainerStarted","Data":"1d0efa609fef276c4506be8fe082e9dc4c3eff1473648ae8af120fa2561e02f8"} Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.988284 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.991157 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-gv282" event={"ID":"3b27b80a-df1a-4a29-82d6-384db5b6612e","Type":"ContainerStarted","Data":"8c554403df643dcdc597ab990baa0800f1e9129d291220bb1002a3de6c9d5c20"} Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.992221 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-gv282" Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.994582 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sshdm" event={"ID":"280972fd-54d5-4bd4-824f-6e5d16f77f21","Type":"ContainerStarted","Data":"9bde89cf6aa783128612321b586c34ded9a9b6098baee6bd7a2c8602921175af"} Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.996416 5112 patch_prober.go:28] interesting pod/downloads-747b44746d-gv282 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.996505 5112 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-gv282" podUID="3b27b80a-df1a-4a29-82d6-384db5b6612e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.997699 5112 
generic.go:358] "Generic (PLEG): container finished" podID="210c8180-5efd-403d-bc10-32004b40c0dc" containerID="8852fb2a24d8b44a6856628da3146be02dc696d14b217124221b67b922e2fdf7" exitCode=0 Dec 08 17:42:36 crc kubenswrapper[5112]: I1208 17:42:36.997795 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-8k2zp" event={"ID":"210c8180-5efd-403d-bc10-32004b40c0dc","Type":"ContainerDied","Data":"8852fb2a24d8b44a6856628da3146be02dc696d14b217124221b67b922e2fdf7"} Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.003675 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.005202 5112 generic.go:358] "Generic (PLEG): container finished" podID="19fcd464-915f-4883-8da8-c4dffba0bbbd" containerID="c78f0f475c2236e22b029936156b339e1c08c53707c39d6007dcf4a6b4927deb" exitCode=0 Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.005345 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-n6jr7" event={"ID":"19fcd464-915f-4883-8da8-c4dffba0bbbd","Type":"ContainerDied","Data":"c78f0f475c2236e22b029936156b339e1c08c53707c39d6007dcf4a6b4927deb"} Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.016884 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-h7sbx" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.017709 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-9rvxw" event={"ID":"6410ce59-323e-498a-b4f6-fe662a4c2d9b","Type":"ContainerStarted","Data":"a6bd6ae901deb9e472806731a4c61ec25a121462fb51fad54157befd5d9c6fb7"} Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.019381 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k5crt" event={"ID":"af065ece-a0e6-49a0-ba5e-21875f49cbd2","Type":"ContainerStarted","Data":"75db4fd4ec545febaf46d652bb3fe582d6fe0aee68f5dbf0f58490bd5d97485d"} Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.045718 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.062940 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.083652 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.098655 5112 patch_prober.go:28] interesting pod/console-operator-67c89758df-7q49w container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.099098 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k5crt" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.104791 5112 reflector.go:430] "Caches 
populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.100628 5112 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-7q49w" podUID="754d2239-a1f1-4950-af6d-5f18fcc9b2db" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.122455 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.145761 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.175929 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.186700 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.208793 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.226603 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.245500 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\"" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.293344 5112 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7rdv\" (UniqueName: \"kubernetes.io/projected/34f5e653-2a78-42fa-ae6e-776dcc6fb3a7-kube-api-access-x7rdv\") pod \"olm-operator-5cdf44d969-nrr58\" (UID: \"34f5e653-2a78-42fa-ae6e-776dcc6fb3a7\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-nrr58" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.311586 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7adf44ec-4226-407e-85c7-bd8a5d9bbf0d-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-kc9qw\" (UID: \"7adf44ec-4226-407e-85c7-bd8a5d9bbf0d\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-kc9qw" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.333400 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-54k67\" (UniqueName: \"kubernetes.io/projected/15777a2f-256b-4501-9856-749819a161a9-kube-api-access-54k67\") pod \"collect-profiles-29420250-8gf2b\" (UID: \"15777a2f-256b-4501-9856-749819a161a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-8gf2b" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.365285 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-8gf2b" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.367633 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec53c21b-b648-4496-882b-64dbb3f54c68-kube-api-access\") pod \"kube-apiserver-operator-575994946d-w7gg4\" (UID: \"ec53c21b-b648-4496-882b-64dbb3f54c68\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-w7gg4" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.371840 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfhf6\" (UniqueName: \"kubernetes.io/projected/9b0063af-3ff2-4e04-81f7-56971d792d20-kube-api-access-xfhf6\") pod \"openshift-config-operator-5777786469-hc5xj\" (UID: \"9b0063af-3ff2-4e04-81f7-56971d792d20\") " pod="openshift-config-operator/openshift-config-operator-5777786469-hc5xj" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.374332 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.375830 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-nrr58" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.389424 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-cxzx8"] Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.400510 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hsn2\" (UniqueName: \"kubernetes.io/projected/7adf44ec-4226-407e-85c7-bd8a5d9bbf0d-kube-api-access-9hsn2\") pod \"ingress-operator-6b9cb4dbcf-kc9qw\" (UID: \"7adf44ec-4226-407e-85c7-bd8a5d9bbf0d\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-kc9qw" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.405627 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3fcce943-40d4-4ee8-aabb-7754a1bde5bc-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-fpwm7\" (UID: \"3fcce943-40d4-4ee8-aabb-7754a1bde5bc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fpwm7" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.429659 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2ea5f194-6a0d-4339-9c15-bde6d3ca1540-registry-certificates\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.429717 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2ea5f194-6a0d-4339-9c15-bde6d3ca1540-ca-trust-extracted\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " 
pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.429747 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2ea5f194-6a0d-4339-9c15-bde6d3ca1540-installation-pull-secrets\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.429887 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2ea5f194-6a0d-4339-9c15-bde6d3ca1540-bound-sa-token\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.429951 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clc4d\" (UniqueName: \"kubernetes.io/projected/2ea5f194-6a0d-4339-9c15-bde6d3ca1540-kube-api-access-clc4d\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.430002 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2ea5f194-6a0d-4339-9c15-bde6d3ca1540-registry-tls\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.430029 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/2ea5f194-6a0d-4339-9c15-bde6d3ca1540-trusted-ca\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.430060 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:37 crc kubenswrapper[5112]: E1208 17:42:37.432311 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:37.932293521 +0000 UTC m=+134.941842222 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.483407 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-qkblt"] Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.500507 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-82k2c"] Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.501388 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-h7sbx"] Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.531047 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.531400 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2ea5f194-6a0d-4339-9c15-bde6d3ca1540-registry-tls\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.531427 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ace9dd66-3bc5-4b64-afe3-4f05af28644c-registration-dir\") pod \"csi-hostpathplugin-p4h9p\" (UID: \"ace9dd66-3bc5-4b64-afe3-4f05af28644c\") " pod="hostpath-provisioner/csi-hostpathplugin-p4h9p" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.531454 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2ea5f194-6a0d-4339-9c15-bde6d3ca1540-bound-sa-token\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.531470 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0e51fda2-d38e-45fc-aa7a-14fe47e53037-machine-approver-tls\") pod \"machine-approver-54c688565-zcf9f\" (UID: \"0e51fda2-d38e-45fc-aa7a-14fe47e53037\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-zcf9f" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.531487 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxrkt\" (UniqueName: \"kubernetes.io/projected/3227aa65-bab5-40ec-9da8-eeadf9187a30-kube-api-access-nxrkt\") pod \"service-ca-operator-5b9c976747-xvpqj\" (UID: \"3227aa65-bab5-40ec-9da8-eeadf9187a30\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-xvpqj" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.531559 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a1258bd8-1206-44b0-8eba-2d2ed9e8dc42-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-2kppn\" (UID: \"a1258bd8-1206-44b0-8eba-2d2ed9e8dc42\") 
" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2kppn" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.531592 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/ace9dd66-3bc5-4b64-afe3-4f05af28644c-plugins-dir\") pod \"csi-hostpathplugin-p4h9p\" (UID: \"ace9dd66-3bc5-4b64-afe3-4f05af28644c\") " pod="hostpath-provisioner/csi-hostpathplugin-p4h9p" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.531620 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dt8g\" (UniqueName: \"kubernetes.io/projected/b9deecbe-7b73-4e1a-8cca-ac79d53ae30f-kube-api-access-5dt8g\") pod \"kube-storage-version-migrator-operator-565b79b866-qpqxs\" (UID: \"b9deecbe-7b73-4e1a-8cca-ac79d53ae30f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-qpqxs" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.531637 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7203b67e-ad3c-4af4-905c-eb6c92ceeed3-images\") pod \"machine-config-operator-67c9d58cbb-xwx7k\" (UID: \"7203b67e-ad3c-4af4-905c-eb6c92ceeed3\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-xwx7k" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.531662 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xbl2\" (UniqueName: \"kubernetes.io/projected/086a07f6-6e0f-4332-8724-d29c680a0ae5-kube-api-access-6xbl2\") pod \"migrator-866fcbc849-4lrgt\" (UID: \"086a07f6-6e0f-4332-8724-d29c680a0ae5\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-4lrgt" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.531690 5112 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-clc4d\" (UniqueName: \"kubernetes.io/projected/2ea5f194-6a0d-4339-9c15-bde6d3ca1540-kube-api-access-clc4d\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.531706 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3227aa65-bab5-40ec-9da8-eeadf9187a30-config\") pod \"service-ca-operator-5b9c976747-xvpqj\" (UID: \"3227aa65-bab5-40ec-9da8-eeadf9187a30\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-xvpqj" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.531738 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2ea5f194-6a0d-4339-9c15-bde6d3ca1540-trusted-ca\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.531755 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d4012bb8-5470-4545-9344-50a74df66572-metrics-tls\") pod \"dns-default-z5hzx\" (UID: \"d4012bb8-5470-4545-9344-50a74df66572\") " pod="openshift-dns/dns-default-z5hzx" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.531783 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9deecbe-7b73-4e1a-8cca-ac79d53ae30f-config\") pod \"kube-storage-version-migrator-operator-565b79b866-qpqxs\" (UID: \"b9deecbe-7b73-4e1a-8cca-ac79d53ae30f\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-qpqxs"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.531800 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/02b6f45a-2d25-4712-b127-c1906f6fb154-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-v5t7z\" (UID: \"02b6f45a-2d25-4712-b127-c1906f6fb154\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-v5t7z"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.531816 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsl7t\" (UniqueName: \"kubernetes.io/projected/cd227b10-f0cc-41ae-a9bf-dc88e3f3d0cd-kube-api-access-hsl7t\") pod \"packageserver-7d4fc7d867-ms6jw\" (UID: \"cd227b10-f0cc-41ae-a9bf-dc88e3f3d0cd\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ms6jw"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.531834 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/02b6f45a-2d25-4712-b127-c1906f6fb154-tmp\") pod \"marketplace-operator-547dbd544d-v5t7z\" (UID: \"02b6f45a-2d25-4712-b127-c1906f6fb154\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-v5t7z"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.531850 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr86k\" (UniqueName: \"kubernetes.io/projected/9dd1e913-a30b-4f99-884f-db1d9526f7f5-kube-api-access-cr86k\") pod \"catalog-operator-75ff9f647d-j76sm\" (UID: \"9dd1e913-a30b-4f99-884f-db1d9526f7f5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-j76sm"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.531875 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc48g\" (UniqueName: \"kubernetes.io/projected/65a17c30-dc44-43d4-8563-e5161462458c-kube-api-access-kc48g\") pod \"router-default-68cf44c8b8-p9hpg\" (UID: \"65a17c30-dc44-43d4-8563-e5161462458c\") " pod="openshift-ingress/router-default-68cf44c8b8-p9hpg"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.531889 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cd227b10-f0cc-41ae-a9bf-dc88e3f3d0cd-apiservice-cert\") pod \"packageserver-7d4fc7d867-ms6jw\" (UID: \"cd227b10-f0cc-41ae-a9bf-dc88e3f3d0cd\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ms6jw"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.531903 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/234e7e70-7bb6-457f-a170-f1349602c58a-ready\") pod \"cni-sysctl-allowlist-ds-6gxxt\" (UID: \"234e7e70-7bb6-457f-a170-f1349602c58a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-6gxxt"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.531937 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/65a17c30-dc44-43d4-8563-e5161462458c-metrics-certs\") pod \"router-default-68cf44c8b8-p9hpg\" (UID: \"65a17c30-dc44-43d4-8563-e5161462458c\") " pod="openshift-ingress/router-default-68cf44c8b8-p9hpg"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.531953 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0e51fda2-d38e-45fc-aa7a-14fe47e53037-auth-proxy-config\") pod \"machine-approver-54c688565-zcf9f\" (UID: \"0e51fda2-d38e-45fc-aa7a-14fe47e53037\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-zcf9f"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.531986 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ace9dd66-3bc5-4b64-afe3-4f05af28644c-socket-dir\") pod \"csi-hostpathplugin-p4h9p\" (UID: \"ace9dd66-3bc5-4b64-afe3-4f05af28644c\") " pod="hostpath-provisioner/csi-hostpathplugin-p4h9p"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.532035 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/02b6f45a-2d25-4712-b127-c1906f6fb154-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-v5t7z\" (UID: \"02b6f45a-2d25-4712-b127-c1906f6fb154\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-v5t7z"
Dec 08 17:42:37 crc kubenswrapper[5112]: E1208 17:42:37.533967 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:38.033948695 +0000 UTC m=+135.043497396 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.534097 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9dd1e913-a30b-4f99-884f-db1d9526f7f5-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-j76sm\" (UID: \"9dd1e913-a30b-4f99-884f-db1d9526f7f5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-j76sm"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.534171 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/234e7e70-7bb6-457f-a170-f1349602c58a-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-6gxxt\" (UID: \"234e7e70-7bb6-457f-a170-f1349602c58a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-6gxxt"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.534354 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/04cba9fb-d24a-4d7a-b58c-edb03c2ee2ee-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-fmwqx\" (UID: \"04cba9fb-d24a-4d7a-b58c-edb03c2ee2ee\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-fmwqx"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.534441 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/136903e0-14bd-4e29-afb8-d552dc8eb9af-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-mpfh5\" (UID: \"136903e0-14bd-4e29-afb8-d552dc8eb9af\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-mpfh5"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.534477 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/9dd1e913-a30b-4f99-884f-db1d9526f7f5-tmpfs\") pod \"catalog-operator-75ff9f647d-j76sm\" (UID: \"9dd1e913-a30b-4f99-884f-db1d9526f7f5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-j76sm"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.534501 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/65a17c30-dc44-43d4-8563-e5161462458c-default-certificate\") pod \"router-default-68cf44c8b8-p9hpg\" (UID: \"65a17c30-dc44-43d4-8563-e5161462458c\") " pod="openshift-ingress/router-default-68cf44c8b8-p9hpg"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.534540 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dtj8\" (UniqueName: \"kubernetes.io/projected/ace9dd66-3bc5-4b64-afe3-4f05af28644c-kube-api-access-7dtj8\") pod \"csi-hostpathplugin-p4h9p\" (UID: \"ace9dd66-3bc5-4b64-afe3-4f05af28644c\") " pod="hostpath-provisioner/csi-hostpathplugin-p4h9p"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.534562 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9dd1e913-a30b-4f99-884f-db1d9526f7f5-srv-cert\") pod \"catalog-operator-75ff9f647d-j76sm\" (UID: \"9dd1e913-a30b-4f99-884f-db1d9526f7f5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-j76sm"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.534587 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/995fc011-e41c-4695-ba5b-5e8709909e28-signing-key\") pod \"service-ca-74545575db-kmdd7\" (UID: \"995fc011-e41c-4695-ba5b-5e8709909e28\") " pod="openshift-service-ca/service-ca-74545575db-kmdd7"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.534656 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9deecbe-7b73-4e1a-8cca-ac79d53ae30f-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-qpqxs\" (UID: \"b9deecbe-7b73-4e1a-8cca-ac79d53ae30f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-qpqxs"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.534677 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35b91b6a-32f7-4c13-a156-8b2b45f9e9d0-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-vzm4l\" (UID: \"35b91b6a-32f7-4c13-a156-8b2b45f9e9d0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-vzm4l"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.534720 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/995fc011-e41c-4695-ba5b-5e8709909e28-signing-cabundle\") pod \"service-ca-74545575db-kmdd7\" (UID: \"995fc011-e41c-4695-ba5b-5e8709909e28\") " pod="openshift-service-ca/service-ca-74545575db-kmdd7"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.534746 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdhbv\" (UniqueName: \"kubernetes.io/projected/7203b67e-ad3c-4af4-905c-eb6c92ceeed3-kube-api-access-pdhbv\") pod \"machine-config-operator-67c9d58cbb-xwx7k\" (UID: \"7203b67e-ad3c-4af4-905c-eb6c92ceeed3\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-xwx7k"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.534769 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d4012bb8-5470-4545-9344-50a74df66572-tmp-dir\") pod \"dns-default-z5hzx\" (UID: \"d4012bb8-5470-4545-9344-50a74df66572\") " pod="openshift-dns/dns-default-z5hzx"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.534808 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35b91b6a-32f7-4c13-a156-8b2b45f9e9d0-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-vzm4l\" (UID: \"35b91b6a-32f7-4c13-a156-8b2b45f9e9d0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-vzm4l"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.534829 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4012bb8-5470-4545-9344-50a74df66572-config-volume\") pod \"dns-default-z5hzx\" (UID: \"d4012bb8-5470-4545-9344-50a74df66572\") " pod="openshift-dns/dns-default-z5hzx"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.534854 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2kh4\" (UniqueName: \"kubernetes.io/projected/d4012bb8-5470-4545-9344-50a74df66572-kube-api-access-g2kh4\") pod \"dns-default-z5hzx\" (UID: \"d4012bb8-5470-4545-9344-50a74df66572\") " pod="openshift-dns/dns-default-z5hzx"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.534900 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3227aa65-bab5-40ec-9da8-eeadf9187a30-serving-cert\") pod \"service-ca-operator-5b9c976747-xvpqj\" (UID: \"3227aa65-bab5-40ec-9da8-eeadf9187a30\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-xvpqj"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.534921 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b57b87c8-9f03-469e-a427-29fc0b5ea61b-node-bootstrap-token\") pod \"machine-config-server-fttjc\" (UID: \"b57b87c8-9f03-469e-a427-29fc0b5ea61b\") " pod="openshift-machine-config-operator/machine-config-server-fttjc"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.534957 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35b91b6a-32f7-4c13-a156-8b2b45f9e9d0-config\") pod \"openshift-kube-scheduler-operator-54f497555d-vzm4l\" (UID: \"35b91b6a-32f7-4c13-a156-8b2b45f9e9d0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-vzm4l"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.535017 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktvn8\" (UniqueName: \"kubernetes.io/projected/009d3924-f028-4f36-9c85-df76d4ec0a70-kube-api-access-ktvn8\") pod \"multus-admission-controller-69db94689b-ws944\" (UID: \"009d3924-f028-4f36-9c85-df76d4ec0a70\") " pod="openshift-multus/multus-admission-controller-69db94689b-ws944"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.535042 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gncnd\" (UniqueName: \"kubernetes.io/projected/02b6f45a-2d25-4712-b127-c1906f6fb154-kube-api-access-gncnd\") pod \"marketplace-operator-547dbd544d-v5t7z\" (UID: \"02b6f45a-2d25-4712-b127-c1906f6fb154\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-v5t7z"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.535066 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7203b67e-ad3c-4af4-905c-eb6c92ceeed3-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-xwx7k\" (UID: \"7203b67e-ad3c-4af4-905c-eb6c92ceeed3\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-xwx7k"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.536450 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mwb9\" (UniqueName: \"kubernetes.io/projected/a1258bd8-1206-44b0-8eba-2d2ed9e8dc42-kube-api-access-5mwb9\") pod \"package-server-manager-77f986bd66-2kppn\" (UID: \"a1258bd8-1206-44b0-8eba-2d2ed9e8dc42\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2kppn"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.536505 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/04cba9fb-d24a-4d7a-b58c-edb03c2ee2ee-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-fmwqx\" (UID: \"04cba9fb-d24a-4d7a-b58c-edb03c2ee2ee\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-fmwqx"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.536526 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv94h\" (UniqueName: \"kubernetes.io/projected/995fc011-e41c-4695-ba5b-5e8709909e28-kube-api-access-rv94h\") pod \"service-ca-74545575db-kmdd7\" (UID: \"995fc011-e41c-4695-ba5b-5e8709909e28\") " pod="openshift-service-ca/service-ca-74545575db-kmdd7"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.536587 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b57b87c8-9f03-469e-a427-29fc0b5ea61b-certs\") pod \"machine-config-server-fttjc\" (UID: \"b57b87c8-9f03-469e-a427-29fc0b5ea61b\") " pod="openshift-machine-config-operator/machine-config-server-fttjc"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.536630 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtp25\" (UniqueName: \"kubernetes.io/projected/0e51fda2-d38e-45fc-aa7a-14fe47e53037-kube-api-access-dtp25\") pod \"machine-approver-54c688565-zcf9f\" (UID: \"0e51fda2-d38e-45fc-aa7a-14fe47e53037\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-zcf9f"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.536679 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.536752 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdf9f\" (UniqueName: \"kubernetes.io/projected/234e7e70-7bb6-457f-a170-f1349602c58a-kube-api-access-gdf9f\") pod \"cni-sysctl-allowlist-ds-6gxxt\" (UID: \"234e7e70-7bb6-457f-a170-f1349602c58a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-6gxxt"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.536920 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whwgp\" (UniqueName: \"kubernetes.io/projected/a0286138-8763-49b9-b839-a6f8451a42df-kube-api-access-whwgp\") pod \"ingress-canary-hjh5k\" (UID: \"a0286138-8763-49b9-b839-a6f8451a42df\") " pod="openshift-ingress-canary/ingress-canary-hjh5k"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.536939 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7203b67e-ad3c-4af4-905c-eb6c92ceeed3-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-xwx7k\" (UID: \"7203b67e-ad3c-4af4-905c-eb6c92ceeed3\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-xwx7k"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.536983 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/cd227b10-f0cc-41ae-a9bf-dc88e3f3d0cd-tmpfs\") pod \"packageserver-7d4fc7d867-ms6jw\" (UID: \"cd227b10-f0cc-41ae-a9bf-dc88e3f3d0cd\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ms6jw"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.537013 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e51fda2-d38e-45fc-aa7a-14fe47e53037-config\") pod \"machine-approver-54c688565-zcf9f\" (UID: \"0e51fda2-d38e-45fc-aa7a-14fe47e53037\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-zcf9f"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.537144 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2ea5f194-6a0d-4339-9c15-bde6d3ca1540-registry-certificates\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.537191 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/234e7e70-7bb6-457f-a170-f1349602c58a-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-6gxxt\" (UID: \"234e7e70-7bb6-457f-a170-f1349602c58a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-6gxxt"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.537232 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/35b91b6a-32f7-4c13-a156-8b2b45f9e9d0-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-vzm4l\" (UID: \"35b91b6a-32f7-4c13-a156-8b2b45f9e9d0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-vzm4l"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.537275 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2ea5f194-6a0d-4339-9c15-bde6d3ca1540-ca-trust-extracted\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.537295 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9dl8\" (UniqueName: \"kubernetes.io/projected/136903e0-14bd-4e29-afb8-d552dc8eb9af-kube-api-access-b9dl8\") pod \"control-plane-machine-set-operator-75ffdb6fcd-mpfh5\" (UID: \"136903e0-14bd-4e29-afb8-d552dc8eb9af\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-mpfh5"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.537311 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ace9dd66-3bc5-4b64-afe3-4f05af28644c-mountpoint-dir\") pod \"csi-hostpathplugin-p4h9p\" (UID: \"ace9dd66-3bc5-4b64-afe3-4f05af28644c\") " pod="hostpath-provisioner/csi-hostpathplugin-p4h9p"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.537370 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2ea5f194-6a0d-4339-9c15-bde6d3ca1540-installation-pull-secrets\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.554536 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2ea5f194-6a0d-4339-9c15-bde6d3ca1540-trusted-ca\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8"
Dec 08 17:42:37 crc kubenswrapper[5112]: E1208 17:42:37.555028 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:38.055006672 +0000 UTC m=+135.064555373 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.555825 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2ea5f194-6a0d-4339-9c15-bde6d3ca1540-ca-trust-extracted\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.557799 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2ea5f194-6a0d-4339-9c15-bde6d3ca1540-registry-tls\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.558044 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2ea5f194-6a0d-4339-9c15-bde6d3ca1540-registry-certificates\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.562750 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a0286138-8763-49b9-b839-a6f8451a42df-cert\") pod \"ingress-canary-hjh5k\" (UID: \"a0286138-8763-49b9-b839-a6f8451a42df\") " pod="openshift-ingress-canary/ingress-canary-hjh5k"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.562978 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/65a17c30-dc44-43d4-8563-e5161462458c-stats-auth\") pod \"router-default-68cf44c8b8-p9hpg\" (UID: \"65a17c30-dc44-43d4-8563-e5161462458c\") " pod="openshift-ingress/router-default-68cf44c8b8-p9hpg"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.563497 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65a17c30-dc44-43d4-8563-e5161462458c-service-ca-bundle\") pod \"router-default-68cf44c8b8-p9hpg\" (UID: \"65a17c30-dc44-43d4-8563-e5161462458c\") " pod="openshift-ingress/router-default-68cf44c8b8-p9hpg"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.563618 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-864f9\" (UniqueName: \"kubernetes.io/projected/04cba9fb-d24a-4d7a-b58c-edb03c2ee2ee-kube-api-access-864f9\") pod \"machine-config-controller-f9cdd68f7-fmwqx\" (UID: \"04cba9fb-d24a-4d7a-b58c-edb03c2ee2ee\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-fmwqx"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.563779 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h8hb\" (UniqueName: \"kubernetes.io/projected/b57b87c8-9f03-469e-a427-29fc0b5ea61b-kube-api-access-6h8hb\") pod \"machine-config-server-fttjc\" (UID: \"b57b87c8-9f03-469e-a427-29fc0b5ea61b\") " pod="openshift-machine-config-operator/machine-config-server-fttjc"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.563885 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/009d3924-f028-4f36-9c85-df76d4ec0a70-webhook-certs\") pod \"multus-admission-controller-69db94689b-ws944\" (UID: \"009d3924-f028-4f36-9c85-df76d4ec0a70\") " pod="openshift-multus/multus-admission-controller-69db94689b-ws944"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.564031 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/ace9dd66-3bc5-4b64-afe3-4f05af28644c-csi-data-dir\") pod \"csi-hostpathplugin-p4h9p\" (UID: \"ace9dd66-3bc5-4b64-afe3-4f05af28644c\") " pod="hostpath-provisioner/csi-hostpathplugin-p4h9p"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.564139 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cd227b10-f0cc-41ae-a9bf-dc88e3f3d0cd-webhook-cert\") pod \"packageserver-7d4fc7d867-ms6jw\" (UID: \"cd227b10-f0cc-41ae-a9bf-dc88e3f3d0cd\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ms6jw"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.568700 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2ea5f194-6a0d-4339-9c15-bde6d3ca1540-installation-pull-secrets\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.591372 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k5crt"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.599352 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-clc4d\" (UniqueName: \"kubernetes.io/projected/2ea5f194-6a0d-4339-9c15-bde6d3ca1540-kube-api-access-clc4d\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.600829 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2ea5f194-6a0d-4339-9c15-bde6d3ca1540-bound-sa-token\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.625029 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-hc5xj"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.635457 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fpwm7"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.653153 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-w7gg4"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.666392 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.666564 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a1258bd8-1206-44b0-8eba-2d2ed9e8dc42-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-2kppn\" (UID: \"a1258bd8-1206-44b0-8eba-2d2ed9e8dc42\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2kppn"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.666603 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/ace9dd66-3bc5-4b64-afe3-4f05af28644c-plugins-dir\") pod \"csi-hostpathplugin-p4h9p\" (UID: \"ace9dd66-3bc5-4b64-afe3-4f05af28644c\") " pod="hostpath-provisioner/csi-hostpathplugin-p4h9p"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.666630 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5dt8g\" (UniqueName: \"kubernetes.io/projected/b9deecbe-7b73-4e1a-8cca-ac79d53ae30f-kube-api-access-5dt8g\") pod \"kube-storage-version-migrator-operator-565b79b866-qpqxs\" (UID: \"b9deecbe-7b73-4e1a-8cca-ac79d53ae30f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-qpqxs"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.666647 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7203b67e-ad3c-4af4-905c-eb6c92ceeed3-images\") pod \"machine-config-operator-67c9d58cbb-xwx7k\" (UID: \"7203b67e-ad3c-4af4-905c-eb6c92ceeed3\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-xwx7k"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.666665 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6xbl2\" (UniqueName: \"kubernetes.io/projected/086a07f6-6e0f-4332-8724-d29c680a0ae5-kube-api-access-6xbl2\") pod \"migrator-866fcbc849-4lrgt\" (UID: \"086a07f6-6e0f-4332-8724-d29c680a0ae5\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-4lrgt"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.666692 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3227aa65-bab5-40ec-9da8-eeadf9187a30-config\") pod \"service-ca-operator-5b9c976747-xvpqj\" (UID: \"3227aa65-bab5-40ec-9da8-eeadf9187a30\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-xvpqj"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.666718 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d4012bb8-5470-4545-9344-50a74df66572-metrics-tls\") pod \"dns-default-z5hzx\" (UID: \"d4012bb8-5470-4545-9344-50a74df66572\") " pod="openshift-dns/dns-default-z5hzx"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.666740 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9deecbe-7b73-4e1a-8cca-ac79d53ae30f-config\") pod \"kube-storage-version-migrator-operator-565b79b866-qpqxs\" (UID: \"b9deecbe-7b73-4e1a-8cca-ac79d53ae30f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-qpqxs"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.666755 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/02b6f45a-2d25-4712-b127-c1906f6fb154-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-v5t7z\" (UID: \"02b6f45a-2d25-4712-b127-c1906f6fb154\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-v5t7z"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.666772 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hsl7t\" (UniqueName: \"kubernetes.io/projected/cd227b10-f0cc-41ae-a9bf-dc88e3f3d0cd-kube-api-access-hsl7t\") pod \"packageserver-7d4fc7d867-ms6jw\" (UID: \"cd227b10-f0cc-41ae-a9bf-dc88e3f3d0cd\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ms6jw"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.666786 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/02b6f45a-2d25-4712-b127-c1906f6fb154-tmp\") pod \"marketplace-operator-547dbd544d-v5t7z\" (UID: \"02b6f45a-2d25-4712-b127-c1906f6fb154\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-v5t7z"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.666801 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cr86k\" (UniqueName: \"kubernetes.io/projected/9dd1e913-a30b-4f99-884f-db1d9526f7f5-kube-api-access-cr86k\") pod \"catalog-operator-75ff9f647d-j76sm\" (UID: \"9dd1e913-a30b-4f99-884f-db1d9526f7f5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-j76sm"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.666816 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kc48g\" (UniqueName: \"kubernetes.io/projected/65a17c30-dc44-43d4-8563-e5161462458c-kube-api-access-kc48g\") pod \"router-default-68cf44c8b8-p9hpg\" (UID: \"65a17c30-dc44-43d4-8563-e5161462458c\") " pod="openshift-ingress/router-default-68cf44c8b8-p9hpg"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.666830 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cd227b10-f0cc-41ae-a9bf-dc88e3f3d0cd-apiservice-cert\") pod \"packageserver-7d4fc7d867-ms6jw\" (UID: \"cd227b10-f0cc-41ae-a9bf-dc88e3f3d0cd\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ms6jw"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.666847 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/234e7e70-7bb6-457f-a170-f1349602c58a-ready\") pod \"cni-sysctl-allowlist-ds-6gxxt\" (UID: \"234e7e70-7bb6-457f-a170-f1349602c58a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-6gxxt"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.666862 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/65a17c30-dc44-43d4-8563-e5161462458c-metrics-certs\") pod \"router-default-68cf44c8b8-p9hpg\" (UID: \"65a17c30-dc44-43d4-8563-e5161462458c\") " pod="openshift-ingress/router-default-68cf44c8b8-p9hpg"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.666881 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0e51fda2-d38e-45fc-aa7a-14fe47e53037-auth-proxy-config\") pod \"machine-approver-54c688565-zcf9f\" (UID: \"0e51fda2-d38e-45fc-aa7a-14fe47e53037\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-zcf9f"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.666900 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ace9dd66-3bc5-4b64-afe3-4f05af28644c-socket-dir\") pod \"csi-hostpathplugin-p4h9p\" (UID: \"ace9dd66-3bc5-4b64-afe3-4f05af28644c\") " pod="hostpath-provisioner/csi-hostpathplugin-p4h9p"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.666931 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/02b6f45a-2d25-4712-b127-c1906f6fb154-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-v5t7z\" (UID: \"02b6f45a-2d25-4712-b127-c1906f6fb154\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-v5t7z"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.666955 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9dd1e913-a30b-4f99-884f-db1d9526f7f5-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-j76sm\" (UID: \"9dd1e913-a30b-4f99-884f-db1d9526f7f5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-j76sm"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.666974 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/234e7e70-7bb6-457f-a170-f1349602c58a-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-6gxxt\" (UID: \"234e7e70-7bb6-457f-a170-f1349602c58a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-6gxxt"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.667003 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/04cba9fb-d24a-4d7a-b58c-edb03c2ee2ee-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-fmwqx\" (UID: \"04cba9fb-d24a-4d7a-b58c-edb03c2ee2ee\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-fmwqx"
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.667032 5112 reconciler_common.go:224] "operationExecutor.MountVolume
started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/136903e0-14bd-4e29-afb8-d552dc8eb9af-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-mpfh5\" (UID: \"136903e0-14bd-4e29-afb8-d552dc8eb9af\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-mpfh5" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.667055 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/9dd1e913-a30b-4f99-884f-db1d9526f7f5-tmpfs\") pod \"catalog-operator-75ff9f647d-j76sm\" (UID: \"9dd1e913-a30b-4f99-884f-db1d9526f7f5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-j76sm" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.667138 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/65a17c30-dc44-43d4-8563-e5161462458c-default-certificate\") pod \"router-default-68cf44c8b8-p9hpg\" (UID: \"65a17c30-dc44-43d4-8563-e5161462458c\") " pod="openshift-ingress/router-default-68cf44c8b8-p9hpg" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.667160 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7dtj8\" (UniqueName: \"kubernetes.io/projected/ace9dd66-3bc5-4b64-afe3-4f05af28644c-kube-api-access-7dtj8\") pod \"csi-hostpathplugin-p4h9p\" (UID: \"ace9dd66-3bc5-4b64-afe3-4f05af28644c\") " pod="hostpath-provisioner/csi-hostpathplugin-p4h9p" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.667174 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9dd1e913-a30b-4f99-884f-db1d9526f7f5-srv-cert\") pod \"catalog-operator-75ff9f647d-j76sm\" (UID: \"9dd1e913-a30b-4f99-884f-db1d9526f7f5\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-j76sm" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.667189 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/995fc011-e41c-4695-ba5b-5e8709909e28-signing-key\") pod \"service-ca-74545575db-kmdd7\" (UID: \"995fc011-e41c-4695-ba5b-5e8709909e28\") " pod="openshift-service-ca/service-ca-74545575db-kmdd7" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.667232 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9deecbe-7b73-4e1a-8cca-ac79d53ae30f-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-qpqxs\" (UID: \"b9deecbe-7b73-4e1a-8cca-ac79d53ae30f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-qpqxs" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.667246 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35b91b6a-32f7-4c13-a156-8b2b45f9e9d0-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-vzm4l\" (UID: \"35b91b6a-32f7-4c13-a156-8b2b45f9e9d0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-vzm4l" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.667264 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/995fc011-e41c-4695-ba5b-5e8709909e28-signing-cabundle\") pod \"service-ca-74545575db-kmdd7\" (UID: \"995fc011-e41c-4695-ba5b-5e8709909e28\") " pod="openshift-service-ca/service-ca-74545575db-kmdd7" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.667281 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pdhbv\" (UniqueName: 
\"kubernetes.io/projected/7203b67e-ad3c-4af4-905c-eb6c92ceeed3-kube-api-access-pdhbv\") pod \"machine-config-operator-67c9d58cbb-xwx7k\" (UID: \"7203b67e-ad3c-4af4-905c-eb6c92ceeed3\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-xwx7k" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.667296 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d4012bb8-5470-4545-9344-50a74df66572-tmp-dir\") pod \"dns-default-z5hzx\" (UID: \"d4012bb8-5470-4545-9344-50a74df66572\") " pod="openshift-dns/dns-default-z5hzx" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.667312 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35b91b6a-32f7-4c13-a156-8b2b45f9e9d0-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-vzm4l\" (UID: \"35b91b6a-32f7-4c13-a156-8b2b45f9e9d0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-vzm4l" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.667327 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4012bb8-5470-4545-9344-50a74df66572-config-volume\") pod \"dns-default-z5hzx\" (UID: \"d4012bb8-5470-4545-9344-50a74df66572\") " pod="openshift-dns/dns-default-z5hzx" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.667340 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g2kh4\" (UniqueName: \"kubernetes.io/projected/d4012bb8-5470-4545-9344-50a74df66572-kube-api-access-g2kh4\") pod \"dns-default-z5hzx\" (UID: \"d4012bb8-5470-4545-9344-50a74df66572\") " pod="openshift-dns/dns-default-z5hzx" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.667360 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/3227aa65-bab5-40ec-9da8-eeadf9187a30-serving-cert\") pod \"service-ca-operator-5b9c976747-xvpqj\" (UID: \"3227aa65-bab5-40ec-9da8-eeadf9187a30\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-xvpqj" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.667375 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b57b87c8-9f03-469e-a427-29fc0b5ea61b-node-bootstrap-token\") pod \"machine-config-server-fttjc\" (UID: \"b57b87c8-9f03-469e-a427-29fc0b5ea61b\") " pod="openshift-machine-config-operator/machine-config-server-fttjc" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.667394 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35b91b6a-32f7-4c13-a156-8b2b45f9e9d0-config\") pod \"openshift-kube-scheduler-operator-54f497555d-vzm4l\" (UID: \"35b91b6a-32f7-4c13-a156-8b2b45f9e9d0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-vzm4l" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.667421 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ktvn8\" (UniqueName: \"kubernetes.io/projected/009d3924-f028-4f36-9c85-df76d4ec0a70-kube-api-access-ktvn8\") pod \"multus-admission-controller-69db94689b-ws944\" (UID: \"009d3924-f028-4f36-9c85-df76d4ec0a70\") " pod="openshift-multus/multus-admission-controller-69db94689b-ws944" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.667444 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gncnd\" (UniqueName: \"kubernetes.io/projected/02b6f45a-2d25-4712-b127-c1906f6fb154-kube-api-access-gncnd\") pod \"marketplace-operator-547dbd544d-v5t7z\" (UID: \"02b6f45a-2d25-4712-b127-c1906f6fb154\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-v5t7z" Dec 08 
17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.667467 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7203b67e-ad3c-4af4-905c-eb6c92ceeed3-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-xwx7k\" (UID: \"7203b67e-ad3c-4af4-905c-eb6c92ceeed3\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-xwx7k" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.667507 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5mwb9\" (UniqueName: \"kubernetes.io/projected/a1258bd8-1206-44b0-8eba-2d2ed9e8dc42-kube-api-access-5mwb9\") pod \"package-server-manager-77f986bd66-2kppn\" (UID: \"a1258bd8-1206-44b0-8eba-2d2ed9e8dc42\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2kppn" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.667529 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/04cba9fb-d24a-4d7a-b58c-edb03c2ee2ee-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-fmwqx\" (UID: \"04cba9fb-d24a-4d7a-b58c-edb03c2ee2ee\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-fmwqx" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.667550 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rv94h\" (UniqueName: \"kubernetes.io/projected/995fc011-e41c-4695-ba5b-5e8709909e28-kube-api-access-rv94h\") pod \"service-ca-74545575db-kmdd7\" (UID: \"995fc011-e41c-4695-ba5b-5e8709909e28\") " pod="openshift-service-ca/service-ca-74545575db-kmdd7" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.667573 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b57b87c8-9f03-469e-a427-29fc0b5ea61b-certs\") pod \"machine-config-server-fttjc\" (UID: 
\"b57b87c8-9f03-469e-a427-29fc0b5ea61b\") " pod="openshift-machine-config-operator/machine-config-server-fttjc" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.667595 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dtp25\" (UniqueName: \"kubernetes.io/projected/0e51fda2-d38e-45fc-aa7a-14fe47e53037-kube-api-access-dtp25\") pod \"machine-approver-54c688565-zcf9f\" (UID: \"0e51fda2-d38e-45fc-aa7a-14fe47e53037\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-zcf9f" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.667627 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gdf9f\" (UniqueName: \"kubernetes.io/projected/234e7e70-7bb6-457f-a170-f1349602c58a-kube-api-access-gdf9f\") pod \"cni-sysctl-allowlist-ds-6gxxt\" (UID: \"234e7e70-7bb6-457f-a170-f1349602c58a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-6gxxt" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.667654 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-whwgp\" (UniqueName: \"kubernetes.io/projected/a0286138-8763-49b9-b839-a6f8451a42df-kube-api-access-whwgp\") pod \"ingress-canary-hjh5k\" (UID: \"a0286138-8763-49b9-b839-a6f8451a42df\") " pod="openshift-ingress-canary/ingress-canary-hjh5k" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.667675 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7203b67e-ad3c-4af4-905c-eb6c92ceeed3-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-xwx7k\" (UID: \"7203b67e-ad3c-4af4-905c-eb6c92ceeed3\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-xwx7k" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.667694 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/cd227b10-f0cc-41ae-a9bf-dc88e3f3d0cd-tmpfs\") pod \"packageserver-7d4fc7d867-ms6jw\" (UID: \"cd227b10-f0cc-41ae-a9bf-dc88e3f3d0cd\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ms6jw" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.667709 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e51fda2-d38e-45fc-aa7a-14fe47e53037-config\") pod \"machine-approver-54c688565-zcf9f\" (UID: \"0e51fda2-d38e-45fc-aa7a-14fe47e53037\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-zcf9f" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.667732 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/234e7e70-7bb6-457f-a170-f1349602c58a-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-6gxxt\" (UID: \"234e7e70-7bb6-457f-a170-f1349602c58a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-6gxxt" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.667749 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/35b91b6a-32f7-4c13-a156-8b2b45f9e9d0-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-vzm4l\" (UID: \"35b91b6a-32f7-4c13-a156-8b2b45f9e9d0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-vzm4l" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.667768 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b9dl8\" (UniqueName: \"kubernetes.io/projected/136903e0-14bd-4e29-afb8-d552dc8eb9af-kube-api-access-b9dl8\") pod \"control-plane-machine-set-operator-75ffdb6fcd-mpfh5\" (UID: \"136903e0-14bd-4e29-afb8-d552dc8eb9af\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-mpfh5" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.667785 
5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ace9dd66-3bc5-4b64-afe3-4f05af28644c-mountpoint-dir\") pod \"csi-hostpathplugin-p4h9p\" (UID: \"ace9dd66-3bc5-4b64-afe3-4f05af28644c\") " pod="hostpath-provisioner/csi-hostpathplugin-p4h9p" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.667819 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a0286138-8763-49b9-b839-a6f8451a42df-cert\") pod \"ingress-canary-hjh5k\" (UID: \"a0286138-8763-49b9-b839-a6f8451a42df\") " pod="openshift-ingress-canary/ingress-canary-hjh5k" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.667833 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/65a17c30-dc44-43d4-8563-e5161462458c-stats-auth\") pod \"router-default-68cf44c8b8-p9hpg\" (UID: \"65a17c30-dc44-43d4-8563-e5161462458c\") " pod="openshift-ingress/router-default-68cf44c8b8-p9hpg" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.667857 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65a17c30-dc44-43d4-8563-e5161462458c-service-ca-bundle\") pod \"router-default-68cf44c8b8-p9hpg\" (UID: \"65a17c30-dc44-43d4-8563-e5161462458c\") " pod="openshift-ingress/router-default-68cf44c8b8-p9hpg" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.667873 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-864f9\" (UniqueName: \"kubernetes.io/projected/04cba9fb-d24a-4d7a-b58c-edb03c2ee2ee-kube-api-access-864f9\") pod \"machine-config-controller-f9cdd68f7-fmwqx\" (UID: \"04cba9fb-d24a-4d7a-b58c-edb03c2ee2ee\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-fmwqx" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 
17:42:37.667893 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6h8hb\" (UniqueName: \"kubernetes.io/projected/b57b87c8-9f03-469e-a427-29fc0b5ea61b-kube-api-access-6h8hb\") pod \"machine-config-server-fttjc\" (UID: \"b57b87c8-9f03-469e-a427-29fc0b5ea61b\") " pod="openshift-machine-config-operator/machine-config-server-fttjc" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.667912 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/009d3924-f028-4f36-9c85-df76d4ec0a70-webhook-certs\") pod \"multus-admission-controller-69db94689b-ws944\" (UID: \"009d3924-f028-4f36-9c85-df76d4ec0a70\") " pod="openshift-multus/multus-admission-controller-69db94689b-ws944" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.667932 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/ace9dd66-3bc5-4b64-afe3-4f05af28644c-csi-data-dir\") pod \"csi-hostpathplugin-p4h9p\" (UID: \"ace9dd66-3bc5-4b64-afe3-4f05af28644c\") " pod="hostpath-provisioner/csi-hostpathplugin-p4h9p" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.667946 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cd227b10-f0cc-41ae-a9bf-dc88e3f3d0cd-webhook-cert\") pod \"packageserver-7d4fc7d867-ms6jw\" (UID: \"cd227b10-f0cc-41ae-a9bf-dc88e3f3d0cd\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ms6jw" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.667968 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ace9dd66-3bc5-4b64-afe3-4f05af28644c-registration-dir\") pod \"csi-hostpathplugin-p4h9p\" (UID: \"ace9dd66-3bc5-4b64-afe3-4f05af28644c\") " pod="hostpath-provisioner/csi-hostpathplugin-p4h9p" 
Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.667987 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0e51fda2-d38e-45fc-aa7a-14fe47e53037-machine-approver-tls\") pod \"machine-approver-54c688565-zcf9f\" (UID: \"0e51fda2-d38e-45fc-aa7a-14fe47e53037\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-zcf9f" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.668004 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nxrkt\" (UniqueName: \"kubernetes.io/projected/3227aa65-bab5-40ec-9da8-eeadf9187a30-kube-api-access-nxrkt\") pod \"service-ca-operator-5b9c976747-xvpqj\" (UID: \"3227aa65-bab5-40ec-9da8-eeadf9187a30\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-xvpqj" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.668561 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d4012bb8-5470-4545-9344-50a74df66572-tmp-dir\") pod \"dns-default-z5hzx\" (UID: \"d4012bb8-5470-4545-9344-50a74df66572\") " pod="openshift-dns/dns-default-z5hzx" Dec 08 17:42:37 crc kubenswrapper[5112]: E1208 17:42:37.668689 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:38.1686626 +0000 UTC m=+135.178211301 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.669936 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7203b67e-ad3c-4af4-905c-eb6c92ceeed3-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-xwx7k\" (UID: \"7203b67e-ad3c-4af4-905c-eb6c92ceeed3\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-xwx7k" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.670535 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35b91b6a-32f7-4c13-a156-8b2b45f9e9d0-config\") pod \"openshift-kube-scheduler-operator-54f497555d-vzm4l\" (UID: \"35b91b6a-32f7-4c13-a156-8b2b45f9e9d0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-vzm4l" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.671106 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0e51fda2-d38e-45fc-aa7a-14fe47e53037-auth-proxy-config\") pod \"machine-approver-54c688565-zcf9f\" (UID: \"0e51fda2-d38e-45fc-aa7a-14fe47e53037\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-zcf9f" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.672576 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ace9dd66-3bc5-4b64-afe3-4f05af28644c-socket-dir\") pod 
\"csi-hostpathplugin-p4h9p\" (UID: \"ace9dd66-3bc5-4b64-afe3-4f05af28644c\") " pod="hostpath-provisioner/csi-hostpathplugin-p4h9p" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.672563 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/ace9dd66-3bc5-4b64-afe3-4f05af28644c-plugins-dir\") pod \"csi-hostpathplugin-p4h9p\" (UID: \"ace9dd66-3bc5-4b64-afe3-4f05af28644c\") " pod="hostpath-provisioner/csi-hostpathplugin-p4h9p" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.672657 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/234e7e70-7bb6-457f-a170-f1349602c58a-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-6gxxt\" (UID: \"234e7e70-7bb6-457f-a170-f1349602c58a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-6gxxt" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.672778 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ace9dd66-3bc5-4b64-afe3-4f05af28644c-registration-dir\") pod \"csi-hostpathplugin-p4h9p\" (UID: \"ace9dd66-3bc5-4b64-afe3-4f05af28644c\") " pod="hostpath-provisioner/csi-hostpathplugin-p4h9p" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.672813 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ace9dd66-3bc5-4b64-afe3-4f05af28644c-mountpoint-dir\") pod \"csi-hostpathplugin-p4h9p\" (UID: \"ace9dd66-3bc5-4b64-afe3-4f05af28644c\") " pod="hostpath-provisioner/csi-hostpathplugin-p4h9p" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.672895 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7203b67e-ad3c-4af4-905c-eb6c92ceeed3-images\") pod \"machine-config-operator-67c9d58cbb-xwx7k\" (UID: 
\"7203b67e-ad3c-4af4-905c-eb6c92ceeed3\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-xwx7k" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.672974 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-kc9qw" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.676467 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/35b91b6a-32f7-4c13-a156-8b2b45f9e9d0-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-vzm4l\" (UID: \"35b91b6a-32f7-4c13-a156-8b2b45f9e9d0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-vzm4l" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.674282 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/02b6f45a-2d25-4712-b127-c1906f6fb154-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-v5t7z\" (UID: \"02b6f45a-2d25-4712-b127-c1906f6fb154\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-v5t7z" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.674515 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/04cba9fb-d24a-4d7a-b58c-edb03c2ee2ee-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-fmwqx\" (UID: \"04cba9fb-d24a-4d7a-b58c-edb03c2ee2ee\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-fmwqx" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.674881 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3227aa65-bab5-40ec-9da8-eeadf9187a30-config\") pod \"service-ca-operator-5b9c976747-xvpqj\" (UID: \"3227aa65-bab5-40ec-9da8-eeadf9187a30\") " 
pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-xvpqj" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.676201 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/cd227b10-f0cc-41ae-a9bf-dc88e3f3d0cd-tmpfs\") pod \"packageserver-7d4fc7d867-ms6jw\" (UID: \"cd227b10-f0cc-41ae-a9bf-dc88e3f3d0cd\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ms6jw" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.676232 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0e51fda2-d38e-45fc-aa7a-14fe47e53037-machine-approver-tls\") pod \"machine-approver-54c688565-zcf9f\" (UID: \"0e51fda2-d38e-45fc-aa7a-14fe47e53037\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-zcf9f" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.676861 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e51fda2-d38e-45fc-aa7a-14fe47e53037-config\") pod \"machine-approver-54c688565-zcf9f\" (UID: \"0e51fda2-d38e-45fc-aa7a-14fe47e53037\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-zcf9f" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.673760 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/234e7e70-7bb6-457f-a170-f1349602c58a-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-6gxxt\" (UID: \"234e7e70-7bb6-457f-a170-f1349602c58a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-6gxxt" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.677161 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7203b67e-ad3c-4af4-905c-eb6c92ceeed3-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-xwx7k\" (UID: 
\"7203b67e-ad3c-4af4-905c-eb6c92ceeed3\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-xwx7k" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.677372 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/009d3924-f028-4f36-9c85-df76d4ec0a70-webhook-certs\") pod \"multus-admission-controller-69db94689b-ws944\" (UID: \"009d3924-f028-4f36-9c85-df76d4ec0a70\") " pod="openshift-multus/multus-admission-controller-69db94689b-ws944" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.677842 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35b91b6a-32f7-4c13-a156-8b2b45f9e9d0-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-vzm4l\" (UID: \"35b91b6a-32f7-4c13-a156-8b2b45f9e9d0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-vzm4l" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.677853 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/ace9dd66-3bc5-4b64-afe3-4f05af28644c-csi-data-dir\") pod \"csi-hostpathplugin-p4h9p\" (UID: \"ace9dd66-3bc5-4b64-afe3-4f05af28644c\") " pod="hostpath-provisioner/csi-hostpathplugin-p4h9p" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.678605 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/234e7e70-7bb6-457f-a170-f1349602c58a-ready\") pod \"cni-sysctl-allowlist-ds-6gxxt\" (UID: \"234e7e70-7bb6-457f-a170-f1349602c58a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-6gxxt" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.679519 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/995fc011-e41c-4695-ba5b-5e8709909e28-signing-cabundle\") pod 
\"service-ca-74545575db-kmdd7\" (UID: \"995fc011-e41c-4695-ba5b-5e8709909e28\") " pod="openshift-service-ca/service-ca-74545575db-kmdd7" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.680359 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65a17c30-dc44-43d4-8563-e5161462458c-service-ca-bundle\") pod \"router-default-68cf44c8b8-p9hpg\" (UID: \"65a17c30-dc44-43d4-8563-e5161462458c\") " pod="openshift-ingress/router-default-68cf44c8b8-p9hpg" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.680744 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/02b6f45a-2d25-4712-b127-c1906f6fb154-tmp\") pod \"marketplace-operator-547dbd544d-v5t7z\" (UID: \"02b6f45a-2d25-4712-b127-c1906f6fb154\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-v5t7z" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.681162 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/9dd1e913-a30b-4f99-884f-db1d9526f7f5-tmpfs\") pod \"catalog-operator-75ff9f647d-j76sm\" (UID: \"9dd1e913-a30b-4f99-884f-db1d9526f7f5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-j76sm" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.682968 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9deecbe-7b73-4e1a-8cca-ac79d53ae30f-config\") pod \"kube-storage-version-migrator-operator-565b79b866-qpqxs\" (UID: \"b9deecbe-7b73-4e1a-8cca-ac79d53ae30f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-qpqxs" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.683164 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/65a17c30-dc44-43d4-8563-e5161462458c-stats-auth\") pod \"router-default-68cf44c8b8-p9hpg\" (UID: \"65a17c30-dc44-43d4-8563-e5161462458c\") " pod="openshift-ingress/router-default-68cf44c8b8-p9hpg" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.686878 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/65a17c30-dc44-43d4-8563-e5161462458c-default-certificate\") pod \"router-default-68cf44c8b8-p9hpg\" (UID: \"65a17c30-dc44-43d4-8563-e5161462458c\") " pod="openshift-ingress/router-default-68cf44c8b8-p9hpg" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.687008 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cd227b10-f0cc-41ae-a9bf-dc88e3f3d0cd-apiservice-cert\") pod \"packageserver-7d4fc7d867-ms6jw\" (UID: \"cd227b10-f0cc-41ae-a9bf-dc88e3f3d0cd\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ms6jw" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.690026 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cd227b10-f0cc-41ae-a9bf-dc88e3f3d0cd-webhook-cert\") pod \"packageserver-7d4fc7d867-ms6jw\" (UID: \"cd227b10-f0cc-41ae-a9bf-dc88e3f3d0cd\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ms6jw" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.690141 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b57b87c8-9f03-469e-a427-29fc0b5ea61b-certs\") pod \"machine-config-server-fttjc\" (UID: \"b57b87c8-9f03-469e-a427-29fc0b5ea61b\") " pod="openshift-machine-config-operator/machine-config-server-fttjc" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.690267 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/9dd1e913-a30b-4f99-884f-db1d9526f7f5-srv-cert\") pod \"catalog-operator-75ff9f647d-j76sm\" (UID: \"9dd1e913-a30b-4f99-884f-db1d9526f7f5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-j76sm" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.690697 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4012bb8-5470-4545-9344-50a74df66572-config-volume\") pod \"dns-default-z5hzx\" (UID: \"d4012bb8-5470-4545-9344-50a74df66572\") " pod="openshift-dns/dns-default-z5hzx" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.690751 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d4012bb8-5470-4545-9344-50a74df66572-metrics-tls\") pod \"dns-default-z5hzx\" (UID: \"d4012bb8-5470-4545-9344-50a74df66572\") " pod="openshift-dns/dns-default-z5hzx" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.691176 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a1258bd8-1206-44b0-8eba-2d2ed9e8dc42-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-2kppn\" (UID: \"a1258bd8-1206-44b0-8eba-2d2ed9e8dc42\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2kppn" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.694134 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3227aa65-bab5-40ec-9da8-eeadf9187a30-serving-cert\") pod \"service-ca-operator-5b9c976747-xvpqj\" (UID: \"3227aa65-bab5-40ec-9da8-eeadf9187a30\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-xvpqj" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.694647 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/04cba9fb-d24a-4d7a-b58c-edb03c2ee2ee-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-fmwqx\" (UID: \"04cba9fb-d24a-4d7a-b58c-edb03c2ee2ee\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-fmwqx" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.694734 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/65a17c30-dc44-43d4-8563-e5161462458c-metrics-certs\") pod \"router-default-68cf44c8b8-p9hpg\" (UID: \"65a17c30-dc44-43d4-8563-e5161462458c\") " pod="openshift-ingress/router-default-68cf44c8b8-p9hpg" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.695200 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-7q49w" podStartSLOduration=115.695179123 podStartE2EDuration="1m55.695179123s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:37.695027319 +0000 UTC m=+134.704576020" watchObservedRunningTime="2025-12-08 17:42:37.695179123 +0000 UTC m=+134.704727824" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.696890 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b57b87c8-9f03-469e-a427-29fc0b5ea61b-node-bootstrap-token\") pod \"machine-config-server-fttjc\" (UID: \"b57b87c8-9f03-469e-a427-29fc0b5ea61b\") " pod="openshift-machine-config-operator/machine-config-server-fttjc" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.698139 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a0286138-8763-49b9-b839-a6f8451a42df-cert\") pod \"ingress-canary-hjh5k\" (UID: \"a0286138-8763-49b9-b839-a6f8451a42df\") " 
pod="openshift-ingress-canary/ingress-canary-hjh5k" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.702565 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/02b6f45a-2d25-4712-b127-c1906f6fb154-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-v5t7z\" (UID: \"02b6f45a-2d25-4712-b127-c1906f6fb154\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-v5t7z" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.703789 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/995fc011-e41c-4695-ba5b-5e8709909e28-signing-key\") pod \"service-ca-74545575db-kmdd7\" (UID: \"995fc011-e41c-4695-ba5b-5e8709909e28\") " pod="openshift-service-ca/service-ca-74545575db-kmdd7" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.706493 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/136903e0-14bd-4e29-afb8-d552dc8eb9af-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-mpfh5\" (UID: \"136903e0-14bd-4e29-afb8-d552dc8eb9af\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-mpfh5" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.709033 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9deecbe-7b73-4e1a-8cca-ac79d53ae30f-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-qpqxs\" (UID: \"b9deecbe-7b73-4e1a-8cca-ac79d53ae30f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-qpqxs" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.713737 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/9dd1e913-a30b-4f99-884f-db1d9526f7f5-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-j76sm\" (UID: \"9dd1e913-a30b-4f99-884f-db1d9526f7f5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-j76sm" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.726241 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxrkt\" (UniqueName: \"kubernetes.io/projected/3227aa65-bab5-40ec-9da8-eeadf9187a30-kube-api-access-nxrkt\") pod \"service-ca-operator-5b9c976747-xvpqj\" (UID: \"3227aa65-bab5-40ec-9da8-eeadf9187a30\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-xvpqj" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.754936 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mwb9\" (UniqueName: \"kubernetes.io/projected/a1258bd8-1206-44b0-8eba-2d2ed9e8dc42-kube-api-access-5mwb9\") pod \"package-server-manager-77f986bd66-2kppn\" (UID: \"a1258bd8-1206-44b0-8eba-2d2ed9e8dc42\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2kppn" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.773214 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-nrr58"] Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.774391 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:37 crc kubenswrapper[5112]: E1208 17:42:37.774889 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:38.274873157 +0000 UTC m=+135.284421858 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.783416 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktvn8\" (UniqueName: \"kubernetes.io/projected/009d3924-f028-4f36-9c85-df76d4ec0a70-kube-api-access-ktvn8\") pod \"multus-admission-controller-69db94689b-ws944\" (UID: \"009d3924-f028-4f36-9c85-df76d4ec0a70\") " pod="openshift-multus/multus-admission-controller-69db94689b-ws944" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.788841 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-xvpqj" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.791515 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h8hb\" (UniqueName: \"kubernetes.io/projected/b57b87c8-9f03-469e-a427-29fc0b5ea61b-kube-api-access-6h8hb\") pod \"machine-config-server-fttjc\" (UID: \"b57b87c8-9f03-469e-a427-29fc0b5ea61b\") " pod="openshift-machine-config-operator/machine-config-server-fttjc" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.842799 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xbl2\" (UniqueName: \"kubernetes.io/projected/086a07f6-6e0f-4332-8724-d29c680a0ae5-kube-api-access-6xbl2\") pod \"migrator-866fcbc849-4lrgt\" (UID: \"086a07f6-6e0f-4332-8724-d29c680a0ae5\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-4lrgt" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.843103 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv94h\" (UniqueName: \"kubernetes.io/projected/995fc011-e41c-4695-ba5b-5e8709909e28-kube-api-access-rv94h\") pod \"service-ca-74545575db-kmdd7\" (UID: \"995fc011-e41c-4695-ba5b-5e8709909e28\") " pod="openshift-service-ca/service-ca-74545575db-kmdd7" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.846002 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-864f9\" (UniqueName: \"kubernetes.io/projected/04cba9fb-d24a-4d7a-b58c-edb03c2ee2ee-kube-api-access-864f9\") pod \"machine-config-controller-f9cdd68f7-fmwqx\" (UID: \"04cba9fb-d24a-4d7a-b58c-edb03c2ee2ee\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-fmwqx" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.862932 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc48g\" (UniqueName: 
\"kubernetes.io/projected/65a17c30-dc44-43d4-8563-e5161462458c-kube-api-access-kc48g\") pod \"router-default-68cf44c8b8-p9hpg\" (UID: \"65a17c30-dc44-43d4-8563-e5161462458c\") " pod="openshift-ingress/router-default-68cf44c8b8-p9hpg" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.865763 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-fttjc" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.875910 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:37 crc kubenswrapper[5112]: E1208 17:42:37.876359 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:38.376341267 +0000 UTC m=+135.385889958 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.882982 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdf9f\" (UniqueName: \"kubernetes.io/projected/234e7e70-7bb6-457f-a170-f1349602c58a-kube-api-access-gdf9f\") pod \"cni-sysctl-allowlist-ds-6gxxt\" (UID: \"234e7e70-7bb6-457f-a170-f1349602c58a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-6gxxt" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.889295 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420250-8gf2b"] Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.897785 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-6gxxt" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.899267 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2kh4\" (UniqueName: \"kubernetes.io/projected/d4012bb8-5470-4545-9344-50a74df66572-kube-api-access-g2kh4\") pod \"dns-default-z5hzx\" (UID: \"d4012bb8-5470-4545-9344-50a74df66572\") " pod="openshift-dns/dns-default-z5hzx" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.923500 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-whwgp\" (UniqueName: \"kubernetes.io/projected/a0286138-8763-49b9-b839-a6f8451a42df-kube-api-access-whwgp\") pod \"ingress-canary-hjh5k\" (UID: \"a0286138-8763-49b9-b839-a6f8451a42df\") " pod="openshift-ingress-canary/ingress-canary-hjh5k" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.945992 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gncnd\" (UniqueName: \"kubernetes.io/projected/02b6f45a-2d25-4712-b127-c1906f6fb154-kube-api-access-gncnd\") pod \"marketplace-operator-547dbd544d-v5t7z\" (UID: \"02b6f45a-2d25-4712-b127-c1906f6fb154\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-v5t7z" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.974131 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-w7gg4"] Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.975723 5112 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-p8dgq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.975785 5112 prober.go:120] "Probe failed" 
probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-p8dgq" podUID="7e0c9c4f-1216-499b-a1dd-be2f225cb97f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.977638 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:37 crc kubenswrapper[5112]: E1208 17:42:37.978113 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:38.478096675 +0000 UTC m=+135.487645376 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.979031 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9dl8\" (UniqueName: \"kubernetes.io/projected/136903e0-14bd-4e29-afb8-d552dc8eb9af-kube-api-access-b9dl8\") pod \"control-plane-machine-set-operator-75ffdb6fcd-mpfh5\" (UID: \"136903e0-14bd-4e29-afb8-d552dc8eb9af\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-mpfh5" Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.985806 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fpwm7"] Dec 08 17:42:37 crc kubenswrapper[5112]: I1208 17:42:37.990523 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtp25\" (UniqueName: \"kubernetes.io/projected/0e51fda2-d38e-45fc-aa7a-14fe47e53037-kube-api-access-dtp25\") pod \"machine-approver-54c688565-zcf9f\" (UID: \"0e51fda2-d38e-45fc-aa7a-14fe47e53037\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-zcf9f" Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.013774 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdhbv\" (UniqueName: \"kubernetes.io/projected/7203b67e-ad3c-4af4-905c-eb6c92ceeed3-kube-api-access-pdhbv\") pod \"machine-config-operator-67c9d58cbb-xwx7k\" (UID: \"7203b67e-ad3c-4af4-905c-eb6c92ceeed3\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-xwx7k" 
Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.015932 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2kppn" Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.017383 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-kmdd7" Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.024527 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-v5t7z" Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.032681 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-4lrgt" Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.034355 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-h7sbx" event={"ID":"5543e4e6-3fcd-4469-961d-5e3ed283f0dd","Type":"ContainerStarted","Data":"429dcbfe2b2f3e1b12a921ff5d308d69bcf2829a2f1108612be68da6c2421000"} Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.036240 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dtj8\" (UniqueName: \"kubernetes.io/projected/ace9dd66-3bc5-4b64-afe3-4f05af28644c-kube-api-access-7dtj8\") pod \"csi-hostpathplugin-p4h9p\" (UID: \"ace9dd66-3bc5-4b64-afe3-4f05af28644c\") " pod="hostpath-provisioner/csi-hostpathplugin-p4h9p" Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.036700 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-cxzx8" event={"ID":"500cfd87-2e0f-4321-a7d5-f19d851aafc9","Type":"ContainerStarted","Data":"12d3332d5bd344c58a98718701a63471e65314738026d18cf4f056b22d86f121"} Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.045961 5112 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hsl7t\" (UniqueName: \"kubernetes.io/projected/cd227b10-f0cc-41ae-a9bf-dc88e3f3d0cd-kube-api-access-hsl7t\") pod \"packageserver-7d4fc7d867-ms6jw\" (UID: \"cd227b10-f0cc-41ae-a9bf-dc88e3f3d0cd\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ms6jw" Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.046424 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-82k2c" event={"ID":"bb7829a6-bbd3-49f8-8dc2-8a605fe4b138","Type":"ContainerStarted","Data":"c616d424196f733b7c3e83ca8563b0866368e5a6a57a2bd00178e8e0ea47dc9c"} Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.065817 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-p9hpg" Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.066677 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-mpfh5" Dec 08 17:42:38 crc kubenswrapper[5112]: W1208 17:42:38.068925 5112 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fcce943_40d4_4ee8_aabb_7754a1bde5bc.slice/crio-24fcf5d3b24f1773d26073bf69e1cf3ded99c8e7c3621d88d007031b041cbe6d WatchSource:0}: Error finding container 24fcf5d3b24f1773d26073bf69e1cf3ded99c8e7c3621d88d007031b041cbe6d: Status 404 returned error can't find the container with id 24fcf5d3b24f1773d26073bf69e1cf3ded99c8e7c3621d88d007031b041cbe6d Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.070289 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-ws944" Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.072207 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-nrr58" event={"ID":"34f5e653-2a78-42fa-ae6e-776dcc6fb3a7","Type":"ContainerStarted","Data":"9fb0f78f84b9e85160ba3553a0a7b52fef7a57eff4fc31c49a9fdcaf47308bad"} Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.072556 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-hc5xj"] Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.074924 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-n6jr7" event={"ID":"19fcd464-915f-4883-8da8-c4dffba0bbbd","Type":"ContainerStarted","Data":"19c74a12f859a7d67ee3688d393e81056917f6f489a7d4380f0fad622a376f84"} Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.079814 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ms6jw" Dec 08 17:42:38 crc kubenswrapper[5112]: E1208 17:42:38.080149 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:38.58011989 +0000 UTC m=+135.589668601 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.080422 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qkblt" event={"ID":"98f49f4b-546f-43bb-bfa3-c6966837ab7c","Type":"ContainerStarted","Data":"cee9773ba293dab1420182f15fe3b7950cb5bd4545747c0c0b6bf3b26c6e7519"} Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.081103 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35b91b6a-32f7-4c13-a156-8b2b45f9e9d0-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-vzm4l\" (UID: \"35b91b6a-32f7-4c13-a156-8b2b45f9e9d0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-vzm4l" Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.079836 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.082742 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: 
\"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8"
Dec 08 17:42:38 crc kubenswrapper[5112]: E1208 17:42:38.083178 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:38.583163882 +0000 UTC m=+135.592712583 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.084210 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-8gf2b" event={"ID":"15777a2f-256b-4501-9856-749819a161a9","Type":"ContainerStarted","Data":"92bfd7ed1663bfc97c038c97d29362b6475f5df29b7e1c6dd434b21e56c0b514"}
Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.088531 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr86k\" (UniqueName: \"kubernetes.io/projected/9dd1e913-a30b-4f99-884f-db1d9526f7f5-kube-api-access-cr86k\") pod \"catalog-operator-75ff9f647d-j76sm\" (UID: \"9dd1e913-a30b-4f99-884f-db1d9526f7f5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-j76sm"
Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.089320 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-8k2zp" event={"ID":"210c8180-5efd-403d-bc10-32004b40c0dc","Type":"ContainerStarted","Data":"7c7d40ce547f8c84b64d1883b9249e706b4bf5b68a4785460f207c1e60f49d26"}
Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.093073 5112 patch_prober.go:28] interesting pod/downloads-747b44746d-gv282 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.093156 5112 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-gv282" podUID="3b27b80a-df1a-4a29-82d6-384db5b6612e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.099095 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-p8dgq"
Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.105569 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dt8g\" (UniqueName: \"kubernetes.io/projected/b9deecbe-7b73-4e1a-8cca-ac79d53ae30f-kube-api-access-5dt8g\") pod \"kube-storage-version-migrator-operator-565b79b866-qpqxs\" (UID: \"b9deecbe-7b73-4e1a-8cca-ac79d53ae30f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-qpqxs"
Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.105753 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-vzm4l"
Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.113470 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-xwx7k"
Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.123773 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-fmwqx"
Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.136427 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-zcf9f"
Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.146683 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-p4h9p"
Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.155450 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-z5hzx"
Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.183015 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-hjh5k"
Dec 08 17:42:38 crc kubenswrapper[5112]: E1208 17:42:38.185772 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:38.685752413 +0000 UTC m=+135.695301114 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.185142 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.186613 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8"
Dec 08 17:42:38 crc kubenswrapper[5112]: E1208 17:42:38.191183 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:38.691160108 +0000 UTC m=+135.700708809 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.275234 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k5crt" podStartSLOduration=116.275197249 podStartE2EDuration="1m56.275197249s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:38.261793099 +0000 UTC m=+135.271341800" watchObservedRunningTime="2025-12-08 17:42:38.275197249 +0000 UTC m=+135.284745950"
Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.283123 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-j76sm"
Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.311428 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:38 crc kubenswrapper[5112]: E1208 17:42:38.312330 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:38.812275837 +0000 UTC m=+135.821824538 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.399269 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-qpqxs"
Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.414549 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8"
Dec 08 17:42:38 crc kubenswrapper[5112]: E1208 17:42:38.414985 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:38.91497322 +0000 UTC m=+135.924521921 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.421248 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-kc9qw"]
Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.516065 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:38 crc kubenswrapper[5112]: E1208 17:42:38.516945 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:39.016911533 +0000 UTC m=+136.026460234 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:38 crc kubenswrapper[5112]: W1208 17:42:38.534687 5112 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e51fda2_d38e_45fc_aa7a_14fe47e53037.slice/crio-8b735cabcf45c139bfff3cda188ca06ed2ec6e52eb1978beee4949a4767f9b5a WatchSource:0}: Error finding container 8b735cabcf45c139bfff3cda188ca06ed2ec6e52eb1978beee4949a4767f9b5a: Status 404 returned error can't find the container with id 8b735cabcf45c139bfff3cda188ca06ed2ec6e52eb1978beee4949a4767f9b5a
Dec 08 17:42:38 crc kubenswrapper[5112]: W1208 17:42:38.552430 5112 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod234e7e70_7bb6_457f_a170_f1349602c58a.slice/crio-df7b9197f4e5181b94ea6e3513730df2dcd58c18ce67ca21bf2302357e969ac7 WatchSource:0}: Error finding container df7b9197f4e5181b94ea6e3513730df2dcd58c18ce67ca21bf2302357e969ac7: Status 404 returned error can't find the container with id df7b9197f4e5181b94ea6e3513730df2dcd58c18ce67ca21bf2302357e969ac7
Dec 08 17:42:38 crc kubenswrapper[5112]: W1208 17:42:38.580940 5112 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb57b87c8_9f03_469e_a427_29fc0b5ea61b.slice/crio-f22759ce768fa724ccc640bea42b1aeba16a18a936ebdc43e15b059beec4ca49 WatchSource:0}: Error finding container f22759ce768fa724ccc640bea42b1aeba16a18a936ebdc43e15b059beec4ca49: Status 404 returned error can't find the container with id f22759ce768fa724ccc640bea42b1aeba16a18a936ebdc43e15b059beec4ca49
Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.619613 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8"
Dec 08 17:42:38 crc kubenswrapper[5112]: E1208 17:42:38.620019 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:39.120000016 +0000 UTC m=+136.129548717 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.627504 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-9rvxw" podStartSLOduration=116.627469027 podStartE2EDuration="1m56.627469027s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:38.575705985 +0000 UTC m=+135.585254686" watchObservedRunningTime="2025-12-08 17:42:38.627469027 +0000 UTC m=+135.637017728"
Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.631254 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-xvpqj"]
Dec 08 17:42:38 crc kubenswrapper[5112]: E1208 17:42:38.725874 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:39.225844674 +0000 UTC m=+136.235393375 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.728344 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.728806 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8"
Dec 08 17:42:38 crc kubenswrapper[5112]: E1208 17:42:38.730402 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:39.230384577 +0000 UTC m=+136.239933278 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.820856 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-gv282" podStartSLOduration=116.8208422 podStartE2EDuration="1m56.8208422s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:38.820056489 +0000 UTC m=+135.829605190" watchObservedRunningTime="2025-12-08 17:42:38.8208422 +0000 UTC m=+135.830390901"
Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.832960 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:38 crc kubenswrapper[5112]: E1208 17:42:38.833192 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:39.333168192 +0000 UTC m=+136.342716893 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.833684 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8"
Dec 08 17:42:38 crc kubenswrapper[5112]: E1208 17:42:38.833957 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:39.333951123 +0000 UTC m=+136.343499814 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.891353 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-74dth" podStartSLOduration=116.891334867 podStartE2EDuration="1m56.891334867s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:38.860931779 +0000 UTC m=+135.870480480" watchObservedRunningTime="2025-12-08 17:42:38.891334867 +0000 UTC m=+135.900883568"
Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.938696 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:38 crc kubenswrapper[5112]: E1208 17:42:38.939182 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:39.439166234 +0000 UTC m=+136.448714935 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:38 crc kubenswrapper[5112]: I1208 17:42:38.957180 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-vzm4l"]
Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.041336 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8"
Dec 08 17:42:39 crc kubenswrapper[5112]: E1208 17:42:39.042133 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:39.542061693 +0000 UTC m=+136.551610394 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.103244 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-f7bpv" podStartSLOduration=117.103227878 podStartE2EDuration="1m57.103227878s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:39.101463171 +0000 UTC m=+136.111011872" watchObservedRunningTime="2025-12-08 17:42:39.103227878 +0000 UTC m=+136.112776579"
Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.111923 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-6gxxt" event={"ID":"234e7e70-7bb6-457f-a170-f1349602c58a","Type":"ContainerStarted","Data":"df7b9197f4e5181b94ea6e3513730df2dcd58c18ce67ca21bf2302357e969ac7"}
Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.134760 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-xvpqj" event={"ID":"3227aa65-bab5-40ec-9da8-eeadf9187a30","Type":"ContainerStarted","Data":"a9b1d3794ef4882cea6c56da4eceabe7d669be4ce005756fb49626ec0e9094d0"}
Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.162703 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:39 crc kubenswrapper[5112]: E1208 17:42:39.162883 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:39.662849713 +0000 UTC m=+136.672398414 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.177468 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8"
Dec 08 17:42:39 crc kubenswrapper[5112]: E1208 17:42:39.178044 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:39.678021151 +0000 UTC m=+136.687569852 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.186564 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rc5qq" podStartSLOduration=117.18654447 podStartE2EDuration="1m57.18654447s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:39.142647019 +0000 UTC m=+136.152195720" watchObservedRunningTime="2025-12-08 17:42:39.18654447 +0000 UTC m=+136.196093171"
Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.204755 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-w7gg4" event={"ID":"ec53c21b-b648-4496-882b-64dbb3f54c68","Type":"ContainerStarted","Data":"bb5177473656c5da4c7fea7adaaceea43d8026e017d5a32e024479a9f8f6d94a"}
Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.231145 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-h7sbx" event={"ID":"5543e4e6-3fcd-4469-961d-5e3ed283f0dd","Type":"ContainerStarted","Data":"064b75d34fe3247080a572ffe424cd98960d7e6cb103fef7f9724bd890609d21"}
Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.257468 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-zcf9f" event={"ID":"0e51fda2-d38e-45fc-aa7a-14fe47e53037","Type":"ContainerStarted","Data":"8b735cabcf45c139bfff3cda188ca06ed2ec6e52eb1978beee4949a4767f9b5a"}
Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.288014 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:39 crc kubenswrapper[5112]: E1208 17:42:39.289065 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:39.789035768 +0000 UTC m=+136.798584459 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.291293 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8"
Dec 08 17:42:39 crc kubenswrapper[5112]: E1208 17:42:39.292294 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:39.792056339 +0000 UTC m=+136.801605110 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.292606 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-4lrgt"]
Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.382157 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-cxzx8" event={"ID":"500cfd87-2e0f-4321-a7d5-f19d851aafc9","Type":"ContainerStarted","Data":"e7bf9e999d2c6a3b9a034c09cab4e2f9352725682bb33aa24561bb13cfa3bfc9"}
Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.382208 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-82k2c" event={"ID":"bb7829a6-bbd3-49f8-8dc2-8a605fe4b138","Type":"ContainerStarted","Data":"618cf5e8e5cf40453fcce22e4767675ae48961fd8c3a4d6c454ebd44386906e4"}
Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.392866 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:39 crc kubenswrapper[5112]: E1208 17:42:39.394639 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:39.894615048 +0000 UTC m=+136.904163749 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.447449 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-p8dgq" podStartSLOduration=117.4474354 podStartE2EDuration="1m57.4474354s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:39.446810353 +0000 UTC m=+136.456359064" watchObservedRunningTime="2025-12-08 17:42:39.4474354 +0000 UTC m=+136.456984101"
Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.453516 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-fmwqx"]
Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.461357 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-hc5xj" event={"ID":"9b0063af-3ff2-4e04-81f7-56971d792d20","Type":"ContainerStarted","Data":"3aa8d1e04abd20acf93a346fb0686c6e2cb55ec25e442115331496602db72a79"}
Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.467741 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-kmdd7"]
Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.470333 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-fttjc" event={"ID":"b57b87c8-9f03-469e-a427-29fc0b5ea61b","Type":"ContainerStarted","Data":"f22759ce768fa724ccc640bea42b1aeba16a18a936ebdc43e15b059beec4ca49"}
Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.497239 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8"
Dec 08 17:42:39 crc kubenswrapper[5112]: E1208 17:42:39.498797 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:39.998776261 +0000 UTC m=+137.008324962 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.507219 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-p4h9p"]
Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.549598 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qkblt" event={"ID":"98f49f4b-546f-43bb-bfa3-c6966837ab7c","Type":"ContainerStarted","Data":"8f8b2ba85401fc52567343aa5092106eed0074612d3320231b3442563e7595f1"}
Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.555464 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-kc9qw" event={"ID":"7adf44ec-4226-407e-85c7-bd8a5d9bbf0d","Type":"ContainerStarted","Data":"ca8cd9386910403eb637227adc5c73a30ffe5ecd423e29cc126a6458722abc1c"}
Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.591191 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-sshdm" podStartSLOduration=117.591171547 podStartE2EDuration="1m57.591171547s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:39.588210907 +0000 UTC m=+136.597759618" watchObservedRunningTime="2025-12-08 17:42:39.591171547 +0000 UTC m=+136.600720258"
Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.619889 5112
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:39 crc kubenswrapper[5112]: E1208 17:42:39.621258 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:40.121235416 +0000 UTC m=+137.130784117 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.642850 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fpwm7" event={"ID":"3fcce943-40d4-4ee8-aabb-7754a1bde5bc","Type":"ContainerStarted","Data":"24fcf5d3b24f1773d26073bf69e1cf3ded99c8e7c3621d88d007031b041cbe6d"} Dec 08 17:42:39 crc kubenswrapper[5112]: W1208 17:42:39.647678 5112 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod086a07f6_6e0f_4332_8724_d29c680a0ae5.slice/crio-90ef1e3500db070a7e5ffeae8f50bb67cc05ae35d7d92d9ab33bd485906335b0 WatchSource:0}: Error finding container 90ef1e3500db070a7e5ffeae8f50bb67cc05ae35d7d92d9ab33bd485906335b0: Status 404 returned error can't find the 
container with id 90ef1e3500db070a7e5ffeae8f50bb67cc05ae35d7d92d9ab33bd485906335b0 Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.653951 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-p9hpg" event={"ID":"65a17c30-dc44-43d4-8563-e5161462458c","Type":"ContainerStarted","Data":"e42b6d3a0b2c1d64bb76bd0098757e0d7336d0aa182854801e59d89b57fca079"} Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.657105 5112 patch_prober.go:28] interesting pod/downloads-747b44746d-gv282 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.657160 5112 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-gv282" podUID="3b27b80a-df1a-4a29-82d6-384db5b6612e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.689806 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-j76sm"] Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.695322 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-v5t7z"] Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.711568 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-xwx7k"] Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.722425 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:39 crc kubenswrapper[5112]: E1208 17:42:39.722806 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:40.222790108 +0000 UTC m=+137.232338809 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.742909 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-m2rqt" podStartSLOduration=117.742889429 podStartE2EDuration="1m57.742889429s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:39.739984771 +0000 UTC m=+136.749533482" watchObservedRunningTime="2025-12-08 17:42:39.742889429 +0000 UTC m=+136.752438140" Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.786315 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-ws944"] Dec 08 17:42:39 crc kubenswrapper[5112]: W1208 17:42:39.802687 5112 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podace9dd66_3bc5_4b64_afe3_4f05af28644c.slice/crio-b8dafcd3f3b5e4f2dab14eac74864169070a5f8e44fed08ed94d9f7cb63c44c0 WatchSource:0}: Error finding container b8dafcd3f3b5e4f2dab14eac74864169070a5f8e44fed08ed94d9f7cb63c44c0: Status 404 returned error can't find the container with id b8dafcd3f3b5e4f2dab14eac74864169070a5f8e44fed08ed94d9f7cb63c44c0 Dec 08 17:42:39 crc kubenswrapper[5112]: W1208 17:42:39.820587 5112 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02b6f45a_2d25_4712_b127_c1906f6fb154.slice/crio-2d795179d45ae0d614c95b9d280a479b2a99483901fc8f0a47ad6107a10cc79a WatchSource:0}: Error finding container 2d795179d45ae0d614c95b9d280a479b2a99483901fc8f0a47ad6107a10cc79a: Status 404 returned error can't find the container with id 2d795179d45ae0d614c95b9d280a479b2a99483901fc8f0a47ad6107a10cc79a Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.823769 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:39 crc kubenswrapper[5112]: E1208 17:42:39.824018 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:40.323995521 +0000 UTC m=+137.333544222 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.824454 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:39 crc kubenswrapper[5112]: E1208 17:42:39.830405 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:40.324893306 +0000 UTC m=+137.334442017 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.858384 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2kppn"] Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.898121 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-mpfh5"] Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.915290 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-z5hzx"] Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.921823 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qkblt" podStartSLOduration=117.921803673 podStartE2EDuration="1m57.921803673s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:39.919542372 +0000 UTC m=+136.929091083" watchObservedRunningTime="2025-12-08 17:42:39.921803673 +0000 UTC m=+136.931352374" Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.931269 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:39 crc kubenswrapper[5112]: E1208 17:42:39.946400 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:40.446161519 +0000 UTC m=+137.455710220 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.946836 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ms6jw"] Dec 08 17:42:39 crc kubenswrapper[5112]: I1208 17:42:39.983221 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-8k2zp" podStartSLOduration=117.983189255 podStartE2EDuration="1m57.983189255s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:39.981047647 +0000 UTC m=+136.990596348" watchObservedRunningTime="2025-12-08 17:42:39.983189255 +0000 UTC m=+136.992737956" Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.000393 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-hjh5k"] Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.017671 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-qpqxs"] Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.018303 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-h7sbx" podStartSLOduration=118.018292759 podStartE2EDuration="1m58.018292759s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:40.013017087 +0000 UTC m=+137.022565788" watchObservedRunningTime="2025-12-08 17:42:40.018292759 +0000 UTC m=+137.027841460" Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.037533 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:40 crc kubenswrapper[5112]: E1208 17:42:40.037945 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:40.537921488 +0000 UTC m=+137.547470189 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.076142 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-82k2c" podStartSLOduration=118.076106825 podStartE2EDuration="1m58.076106825s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:40.059638792 +0000 UTC m=+137.069187493" watchObservedRunningTime="2025-12-08 17:42:40.076106825 +0000 UTC m=+137.085655526" Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.140644 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:40 crc kubenswrapper[5112]: E1208 17:42:40.140876 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:40.640837337 +0000 UTC m=+137.650386038 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.141293 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:40 crc kubenswrapper[5112]: E1208 17:42:40.142131 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:40.642117471 +0000 UTC m=+137.651666172 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.242094 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:40 crc kubenswrapper[5112]: E1208 17:42:40.242203 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:40.742172243 +0000 UTC m=+137.751720944 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.242730 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:40 crc kubenswrapper[5112]: E1208 17:42:40.243176 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:40.74315825 +0000 UTC m=+137.752706951 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.346399 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:40 crc kubenswrapper[5112]: E1208 17:42:40.346630 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:40.846590753 +0000 UTC m=+137.856139464 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.347603 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:40 crc kubenswrapper[5112]: E1208 17:42:40.348281 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:40.848261378 +0000 UTC m=+137.857810079 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.360353 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-8k2zp" Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.360425 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-8k2zp" Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.379227 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-8k2zp" Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.449095 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:40 crc kubenswrapper[5112]: E1208 17:42:40.449721 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:40.949702647 +0000 UTC m=+137.959251348 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.556002 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:40 crc kubenswrapper[5112]: E1208 17:42:40.556743 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:41.056713166 +0000 UTC m=+138.066262047 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.657025 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:40 crc kubenswrapper[5112]: E1208 17:42:40.657672 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:41.157649962 +0000 UTC m=+138.167198663 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.685369 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-n6jr7" event={"ID":"19fcd464-915f-4883-8da8-c4dffba0bbbd","Type":"ContainerStarted","Data":"73336d467fb080e548b19e84a1b7e29c7b31f4c0bb0e3e53b7d907f6945b40c6"}
Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.707702 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fpwm7" event={"ID":"3fcce943-40d4-4ee8-aabb-7754a1bde5bc","Type":"ContainerStarted","Data":"4f8ca8e8e38c8497c024962173ae90529e09855807c66423760a0612f11594a9"}
Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.718844 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-n6jr7" podStartSLOduration=118.718822358 podStartE2EDuration="1m58.718822358s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:40.717714168 +0000 UTC m=+137.727262859" watchObservedRunningTime="2025-12-08 17:42:40.718822358 +0000 UTC m=+137.728371059"
Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.750343 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-fpwm7" podStartSLOduration=118.750320375 podStartE2EDuration="1m58.750320375s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:40.748469216 +0000 UTC m=+137.758017917" watchObservedRunningTime="2025-12-08 17:42:40.750320375 +0000 UTC m=+137.759869076"
Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.754563 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-vzm4l" event={"ID":"35b91b6a-32f7-4c13-a156-8b2b45f9e9d0","Type":"ContainerStarted","Data":"38e8d5791064d645c1e23b43140d743167616c3421d243788e77024edb136bab"}
Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.754625 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-vzm4l" event={"ID":"35b91b6a-32f7-4c13-a156-8b2b45f9e9d0","Type":"ContainerStarted","Data":"f7ae7fd704b9eab489cd360bbbde7f3315220eb741612427ae15510df7647b7e"}
Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.758730 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8"
Dec 08 17:42:40 crc kubenswrapper[5112]: E1208 17:42:40.761813 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:41.261788934 +0000 UTC m=+138.271337635 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.762336 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-4lrgt" event={"ID":"086a07f6-6e0f-4332-8724-d29c680a0ae5","Type":"ContainerStarted","Data":"f96c78f1d14be30a2afc8bbad1e9cd50f7ab8a3840008c251c5133c720d3225d"}
Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.762385 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-4lrgt" event={"ID":"086a07f6-6e0f-4332-8724-d29c680a0ae5","Type":"ContainerStarted","Data":"90ef1e3500db070a7e5ffeae8f50bb67cc05ae35d7d92d9ab33bd485906335b0"}
Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.765054 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-qpqxs" event={"ID":"b9deecbe-7b73-4e1a-8cca-ac79d53ae30f","Type":"ContainerStarted","Data":"505a18a2428dd47f331e209181fd98e33159eec75a92f3a25681849534c91b41"}
Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.769607 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-ws944" event={"ID":"009d3924-f028-4f36-9c85-df76d4ec0a70","Type":"ContainerStarted","Data":"85f6d2aaddb9668a154cc0157be0bc31ded2c8f8de3692df88f0334ceeace29f"}
Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.773032 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-6gxxt" event={"ID":"234e7e70-7bb6-457f-a170-f1349602c58a","Type":"ContainerStarted","Data":"6f5e811595ca12980ae3c34f2a35e64646b6c89adddcc9e33a8090bb47b71053"}
Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.774214 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-6gxxt"
Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.782661 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-vzm4l" podStartSLOduration=118.782642175 podStartE2EDuration="1m58.782642175s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:40.779426659 +0000 UTC m=+137.788975370" watchObservedRunningTime="2025-12-08 17:42:40.782642175 +0000 UTC m=+137.792190876"
Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.786888 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-hjh5k" event={"ID":"a0286138-8763-49b9-b839-a6f8451a42df","Type":"ContainerStarted","Data":"8d70f448febf8eb26e6e0e74e02522d06f3da3ddb1eb83816162e4df0068ba54"}
Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.789741 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-zcf9f" event={"ID":"0e51fda2-d38e-45fc-aa7a-14fe47e53037","Type":"ContainerStarted","Data":"01e72cbfbac127b3402205cb65fba9289fbbba90baddfe819ca3a286d56ef13e"}
Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.796637 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-j76sm" event={"ID":"9dd1e913-a30b-4f99-884f-db1d9526f7f5","Type":"ContainerStarted","Data":"b58f2849ec4d19659a7f4ed11d2a8ab9e40d491bdbb25972c3f38c8bedf1c345"}
Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.797374 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-j76sm"
Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.798691 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2kppn" event={"ID":"a1258bd8-1206-44b0-8eba-2d2ed9e8dc42","Type":"ContainerStarted","Data":"6afc7446d38fc740cef4902c3c4de76f42a97ef32887ed1358dcbc8744a88737"}
Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.809191 5112 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-j76sm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body=
Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.809274 5112 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-j76sm" podUID="9dd1e913-a30b-4f99-884f-db1d9526f7f5" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused"
Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.815060 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-kmdd7" event={"ID":"995fc011-e41c-4695-ba5b-5e8709909e28","Type":"ContainerStarted","Data":"1bb9386ba24127fe42f6d9b34ba622b25ff467362e9432e3e6e5b04fd2956bae"}
Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.815127 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-kmdd7" event={"ID":"995fc011-e41c-4695-ba5b-5e8709909e28","Type":"ContainerStarted","Data":"b277810e040f789fb5852662b8749487416534adbc415cd6d3e8e96446f0dd2a"}
Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.830583 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-fmwqx" event={"ID":"04cba9fb-d24a-4d7a-b58c-edb03c2ee2ee","Type":"ContainerStarted","Data":"b756474a6ad9f94412c841db272ae6d8f8e85105493bd269bfb793349e51f339"}
Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.832785 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z5hzx" event={"ID":"d4012bb8-5470-4545-9344-50a74df66572","Type":"ContainerStarted","Data":"db2a15c9fe029c5903b6c4f214a3b35e2f5a241c5779347eee32564d1e744121"}
Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.833987 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-mpfh5" event={"ID":"136903e0-14bd-4e29-afb8-d552dc8eb9af","Type":"ContainerStarted","Data":"f103d30d31c19a487853e5cf64b1fbe226c1d7f19e33749430ac5656f2a62619"}
Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.875695 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:40 crc kubenswrapper[5112]: E1208 17:42:40.877140 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:41.377117137 +0000 UTC m=+138.386665838 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.882245 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-6gxxt" podStartSLOduration=6.882215214 podStartE2EDuration="6.882215214s" podCreationTimestamp="2025-12-08 17:42:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:40.880748715 +0000 UTC m=+137.890297416" watchObservedRunningTime="2025-12-08 17:42:40.882215214 +0000 UTC m=+137.891763915"
Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.931376 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-v5t7z" event={"ID":"02b6f45a-2d25-4712-b127-c1906f6fb154","Type":"ContainerStarted","Data":"2d795179d45ae0d614c95b9d280a479b2a99483901fc8f0a47ad6107a10cc79a"}
Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.973971 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-j76sm" podStartSLOduration=118.973954133 podStartE2EDuration="1m58.973954133s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:40.973040328 +0000 UTC m=+137.982589029" watchObservedRunningTime="2025-12-08 17:42:40.973954133 +0000 UTC m=+137.983502844"
Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.978192 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8"
Dec 08 17:42:40 crc kubenswrapper[5112]: E1208 17:42:40.978770 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:41.478753562 +0000 UTC m=+138.488302253 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:40 crc kubenswrapper[5112]: I1208 17:42:40.998964 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-nrr58" event={"ID":"34f5e653-2a78-42fa-ae6e-776dcc6fb3a7","Type":"ContainerStarted","Data":"42ef34a811ee7129597c24d7ea15af4a2457adb18b43e153bd7610c54abb800e"}
Dec 08 17:42:41 crc kubenswrapper[5112]: I1208 17:42:41.000066 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-nrr58"
Dec 08 17:42:41 crc kubenswrapper[5112]: I1208 17:42:41.021561 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ms6jw" event={"ID":"cd227b10-f0cc-41ae-a9bf-dc88e3f3d0cd","Type":"ContainerStarted","Data":"f3051d561d696bbdeba772828911f4bd603446290d31182095c33ffeb95e11f6"}
Dec 08 17:42:41 crc kubenswrapper[5112]: I1208 17:42:41.033045 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-fttjc" event={"ID":"b57b87c8-9f03-469e-a427-29fc0b5ea61b","Type":"ContainerStarted","Data":"65f6d19bf6c8a837e2bf945eabc1e405dc55d1f91ccd08f5f578f6d2a6477878"}
Dec 08 17:42:41 crc kubenswrapper[5112]: I1208 17:42:41.033099 5112 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-nrr58 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body=
Dec 08 17:42:41 crc kubenswrapper[5112]: I1208 17:42:41.033178 5112 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-nrr58" podUID="34f5e653-2a78-42fa-ae6e-776dcc6fb3a7" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused"
Dec 08 17:42:41 crc kubenswrapper[5112]: I1208 17:42:41.035788 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-p4h9p" event={"ID":"ace9dd66-3bc5-4b64-afe3-4f05af28644c","Type":"ContainerStarted","Data":"b8dafcd3f3b5e4f2dab14eac74864169070a5f8e44fed08ed94d9f7cb63c44c0"}
Dec 08 17:42:41 crc kubenswrapper[5112]: I1208 17:42:41.057245 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-kmdd7" podStartSLOduration=119.057219023 podStartE2EDuration="1m59.057219023s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:41.050841621 +0000 UTC m=+138.060390322" watchObservedRunningTime="2025-12-08 17:42:41.057219023 +0000 UTC m=+138.066767714"
Dec 08 17:42:41 crc kubenswrapper[5112]: I1208 17:42:41.070002 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-kc9qw" event={"ID":"7adf44ec-4226-407e-85c7-bd8a5d9bbf0d","Type":"ContainerStarted","Data":"0f24d0cf112e8a8f77dfe5daf2ba6e5c33eb95ade22eb6dce41cd6305f5afdc0"}
Dec 08 17:42:41 crc kubenswrapper[5112]: I1208 17:42:41.075205 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-nrr58" podStartSLOduration=119.075186756 podStartE2EDuration="1m59.075186756s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:41.074531909 +0000 UTC m=+138.084080610" watchObservedRunningTime="2025-12-08 17:42:41.075186756 +0000 UTC m=+138.084735457"
Dec 08 17:42:41 crc kubenswrapper[5112]: I1208 17:42:41.097331 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:41 crc kubenswrapper[5112]: E1208 17:42:41.098393 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:41.598371189 +0000 UTC m=+138.607919890 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:41 crc kubenswrapper[5112]: I1208 17:42:41.110312 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-fttjc" podStartSLOduration=7.11028768 podStartE2EDuration="7.11028768s" podCreationTimestamp="2025-12-08 17:42:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:41.107050233 +0000 UTC m=+138.116598934" watchObservedRunningTime="2025-12-08 17:42:41.11028768 +0000 UTC m=+138.119836401"
Dec 08 17:42:41 crc kubenswrapper[5112]: I1208 17:42:41.131069 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-8gf2b" event={"ID":"15777a2f-256b-4501-9856-749819a161a9","Type":"ContainerStarted","Data":"4a6deec5f8c35984055f2ab49b4a9d0045e47411d5f176b48bd8c482423e4b2d"}
Dec 08 17:42:41 crc kubenswrapper[5112]: I1208 17:42:41.164629 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-8gf2b" podStartSLOduration=119.164609151 podStartE2EDuration="1m59.164609151s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:41.154583142 +0000 UTC m=+138.164131843" watchObservedRunningTime="2025-12-08 17:42:41.164609151 +0000 UTC m=+138.174157852"
Dec 08 17:42:41 crc kubenswrapper[5112]: I1208 17:42:41.165549 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-p9hpg" event={"ID":"65a17c30-dc44-43d4-8563-e5161462458c","Type":"ContainerStarted","Data":"7334a9623f63c00e7b1f5cca66737c6b7d77f46148c2511b898eed0af48685c0"}
Dec 08 17:42:41 crc kubenswrapper[5112]: I1208 17:42:41.203938 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8"
Dec 08 17:42:41 crc kubenswrapper[5112]: E1208 17:42:41.204979 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:41.704965447 +0000 UTC m=+138.714514148 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:41 crc kubenswrapper[5112]: I1208 17:42:41.210194 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-xvpqj" event={"ID":"3227aa65-bab5-40ec-9da8-eeadf9187a30","Type":"ContainerStarted","Data":"70fd05d66f0be4e8578e6ae97132133499ad8c35e42d3c2ee57f737eddcb2a57"}
Dec 08 17:42:41 crc kubenswrapper[5112]: I1208 17:42:41.239306 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-w7gg4" event={"ID":"ec53c21b-b648-4496-882b-64dbb3f54c68","Type":"ContainerStarted","Data":"885d0a4e9aad714c085034b77a607d287c406ad46dff1553f01d535da8c30552"}
Dec 08 17:42:41 crc kubenswrapper[5112]: I1208 17:42:41.240174 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-p9hpg" podStartSLOduration=119.240156884 podStartE2EDuration="1m59.240156884s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:41.208373039 +0000 UTC m=+138.217921740" watchObservedRunningTime="2025-12-08 17:42:41.240156884 +0000 UTC m=+138.249705585"
Dec 08 17:42:41 crc kubenswrapper[5112]: I1208 17:42:41.240739 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-xvpqj" podStartSLOduration=119.24073572 podStartE2EDuration="1m59.24073572s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:41.23889988 +0000 UTC m=+138.248448581" watchObservedRunningTime="2025-12-08 17:42:41.24073572 +0000 UTC m=+138.250284421"
Dec 08 17:42:41 crc kubenswrapper[5112]: I1208 17:42:41.279439 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-xwx7k" event={"ID":"7203b67e-ad3c-4af4-905c-eb6c92ceeed3","Type":"ContainerStarted","Data":"f152f20314876064bcba8cde262db7c40050bc57f0e41b544266dc97cdc31358"}
Dec 08 17:42:41 crc kubenswrapper[5112]: I1208 17:42:41.287774 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-w7gg4" podStartSLOduration=119.287754595 podStartE2EDuration="1m59.287754595s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:41.287093217 +0000 UTC m=+138.296641928" watchObservedRunningTime="2025-12-08 17:42:41.287754595 +0000 UTC m=+138.297303296"
Dec 08 17:42:41 crc kubenswrapper[5112]: I1208 17:42:41.289982 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-cxzx8" event={"ID":"500cfd87-2e0f-4321-a7d5-f19d851aafc9","Type":"ContainerStarted","Data":"d1cd6be54c10c5a96930fc703860c928c8ff6716a722399d390ded255966f993"}
Dec 08 17:42:41 crc kubenswrapper[5112]: I1208 17:42:41.305996 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:41 crc kubenswrapper[5112]: E1208 17:42:41.306374 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:41.806340775 +0000 UTC m=+138.815889476 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:41 crc kubenswrapper[5112]: I1208 17:42:41.308134 5112 generic.go:358] "Generic (PLEG): container finished" podID="9b0063af-3ff2-4e04-81f7-56971d792d20" containerID="8d176f3a25176c2442bbf87627c9a4d6611b9a36b405153b3de38233f087437a" exitCode=0
Dec 08 17:42:41 crc kubenswrapper[5112]: I1208 17:42:41.310608 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-hc5xj" event={"ID":"9b0063af-3ff2-4e04-81f7-56971d792d20","Type":"ContainerDied","Data":"8d176f3a25176c2442bbf87627c9a4d6611b9a36b405153b3de38233f087437a"}
Dec 08 17:42:41 crc kubenswrapper[5112]: I1208 17:42:41.314970 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8"
Dec 08 17:42:41 crc kubenswrapper[5112]: E1208 17:42:41.315361 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:41.815345987 +0000 UTC m=+138.824894688 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:41 crc kubenswrapper[5112]: I1208 17:42:41.327457 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-cxzx8" podStartSLOduration=119.327434182 podStartE2EDuration="1m59.327434182s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:41.314854104 +0000 UTC m=+138.324402825" watchObservedRunningTime="2025-12-08 17:42:41.327434182 +0000 UTC m=+138.336982883"
Dec 08 17:42:41 crc kubenswrapper[5112]: I1208 17:42:41.344993 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-8k2zp"
Dec 08 17:42:41 crc kubenswrapper[5112]: I1208 17:42:41.416831 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:41 crc kubenswrapper[5112]: E1208 17:42:41.419010 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:41.918988286 +0000 UTC m=+138.928536997 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:41 crc kubenswrapper[5112]: I1208 17:42:41.521351 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8"
Dec 08 17:42:41 crc kubenswrapper[5112]: E1208 17:42:41.521802 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:42.021786612 +0000 UTC m=+139.031335313 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:41 crc kubenswrapper[5112]: I1208 17:42:41.622598 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:41 crc kubenswrapper[5112]: E1208 17:42:41.622804 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:42.122752938 +0000 UTC m=+139.132301639 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:41 crc kubenswrapper[5112]: I1208 17:42:41.623160 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8"
Dec 08 17:42:41 crc kubenswrapper[5112]: E1208 17:42:41.623563 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:42.12355287 +0000 UTC m=+139.133101571 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:41 crc kubenswrapper[5112]: I1208 17:42:41.724590 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:41 crc kubenswrapper[5112]: E1208 17:42:41.724756 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:42.224730122 +0000 UTC m=+139.234278823 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:41 crc kubenswrapper[5112]: I1208 17:42:41.724872 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8"
Dec 08 17:42:41 crc kubenswrapper[5112]: E1208 17:42:41.725363 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:42.225356599 +0000 UTC m=+139.234905300 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:41 crc kubenswrapper[5112]: I1208 17:42:41.826537 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:41 crc kubenswrapper[5112]: E1208 17:42:41.826747 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:42.326718656 +0000 UTC m=+139.336267357 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:41 crc kubenswrapper[5112]: I1208 17:42:41.827125 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:41 crc kubenswrapper[5112]: E1208 17:42:41.827443 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:42.327431415 +0000 UTC m=+139.336980116 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:41 crc kubenswrapper[5112]: I1208 17:42:41.929210 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:41 crc kubenswrapper[5112]: E1208 17:42:41.929399 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:42.429373188 +0000 UTC m=+139.438921889 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:41 crc kubenswrapper[5112]: I1208 17:42:41.929481 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:41 crc kubenswrapper[5112]: E1208 17:42:41.929878 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:42.429866311 +0000 UTC m=+139.439415012 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.030716 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:42 crc kubenswrapper[5112]: E1208 17:42:42.031200 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:42.531179978 +0000 UTC m=+139.540728679 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.068257 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-p9hpg" Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.085433 5112 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-p9hpg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:42:42 crc kubenswrapper[5112]: [-]has-synced failed: reason withheld Dec 08 17:42:42 crc kubenswrapper[5112]: [+]process-running ok Dec 08 17:42:42 crc kubenswrapper[5112]: healthz check failed Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.085507 5112 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-p9hpg" podUID="65a17c30-dc44-43d4-8563-e5161462458c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.132259 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:42 crc kubenswrapper[5112]: E1208 17:42:42.132537 5112 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:42.632525094 +0000 UTC m=+139.642073785 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.234854 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:42 crc kubenswrapper[5112]: E1208 17:42:42.235153 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:42.735134015 +0000 UTC m=+139.744682716 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.327709 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-4lrgt" event={"ID":"086a07f6-6e0f-4332-8724-d29c680a0ae5","Type":"ContainerStarted","Data":"8fdcfbdf1c2acbfc8865ea666bb1ad41db5b2ae2dde6ab6590084dfd8c8b7b27"} Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.333633 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-qpqxs" event={"ID":"b9deecbe-7b73-4e1a-8cca-ac79d53ae30f","Type":"ContainerStarted","Data":"77f9424491a2bde39003db21977ea9745e30923bc55080701903f3a4df0bc074"} Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.337134 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:42 crc kubenswrapper[5112]: E1208 17:42:42.337567 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:42.837552391 +0000 UTC m=+139.847101092 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.351523 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-ws944" event={"ID":"009d3924-f028-4f36-9c85-df76d4ec0a70","Type":"ContainerStarted","Data":"0317ed1d354cf895e9738166e7fb87e91cbc4ab5683920176af58aa939adb20b"} Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.366545 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-hjh5k" event={"ID":"a0286138-8763-49b9-b839-a6f8451a42df","Type":"ContainerStarted","Data":"1722e55de5f767afa757b60fdab6bbfb50f27471ed2d102d3bb646d84bd23881"} Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.389605 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-j76sm" event={"ID":"9dd1e913-a30b-4f99-884f-db1d9526f7f5","Type":"ContainerStarted","Data":"9732ddaa40d8077394cc50f3901c7c1cb7f85990fc355998efccb597010f2e79"} Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.396163 5112 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-j76sm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.396230 5112 prober.go:120] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-j76sm" podUID="9dd1e913-a30b-4f99-884f-db1d9526f7f5" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.398803 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2kppn" event={"ID":"a1258bd8-1206-44b0-8eba-2d2ed9e8dc42","Type":"ContainerStarted","Data":"5296f00d067ba0fd7cde250ccce87a2acdd23b81b962630584ee915e24b2e724"} Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.405632 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-fmwqx" event={"ID":"04cba9fb-d24a-4d7a-b58c-edb03c2ee2ee","Type":"ContainerStarted","Data":"ca24ae962d1aff39a763d4984f4458890f70486613a5e9cbf4853c79b2e474a3"} Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.406933 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z5hzx" event={"ID":"d4012bb8-5470-4545-9344-50a74df66572","Type":"ContainerStarted","Data":"719efa0759488b6f9ad7901a98734c402a090b46238e75e1bb01ff50c26fdea9"} Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.436065 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-hjh5k" podStartSLOduration=8.436048022 podStartE2EDuration="8.436048022s" podCreationTimestamp="2025-12-08 17:42:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:42.413118175 +0000 UTC m=+139.422666886" watchObservedRunningTime="2025-12-08 17:42:42.436048022 +0000 UTC m=+139.445596723" Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.436404 5112 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-4lrgt" podStartSLOduration=120.436397901 podStartE2EDuration="2m0.436397901s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:42.369402088 +0000 UTC m=+139.378950789" watchObservedRunningTime="2025-12-08 17:42:42.436397901 +0000 UTC m=+139.445946602" Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.438037 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:42 crc kubenswrapper[5112]: E1208 17:42:42.440954 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:42.940885942 +0000 UTC m=+139.950434643 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.465613 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-mpfh5" event={"ID":"136903e0-14bd-4e29-afb8-d552dc8eb9af","Type":"ContainerStarted","Data":"565c42323fda48990cd26b87b27f8d8ecba0cb1a78b30cfd6dd82e7a910cb52a"} Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.471957 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-qpqxs" podStartSLOduration=120.471931637 podStartE2EDuration="2m0.471931637s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:42.460005666 +0000 UTC m=+139.469554367" watchObservedRunningTime="2025-12-08 17:42:42.471931637 +0000 UTC m=+139.481480338" Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.476778 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:42 crc kubenswrapper[5112]: E1208 17:42:42.477662 5112 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:42.97764144 +0000 UTC m=+139.987190141 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.514638 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-zcf9f" podStartSLOduration=120.514621005 podStartE2EDuration="2m0.514621005s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:42.4839402 +0000 UTC m=+139.493488921" watchObservedRunningTime="2025-12-08 17:42:42.514621005 +0000 UTC m=+139.524169696" Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.536623 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-mpfh5" podStartSLOduration=120.536606327 podStartE2EDuration="2m0.536606327s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:42.514976125 +0000 UTC m=+139.524524836" watchObservedRunningTime="2025-12-08 17:42:42.536606327 +0000 UTC m=+139.546155028" Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.537593 5112 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-v5t7z" event={"ID":"02b6f45a-2d25-4712-b127-c1906f6fb154","Type":"ContainerStarted","Data":"856cc3996b9f4ef5ce4e91fe941959c716acef43452d73e51122c52c4b10dd1c"} Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.539683 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-v5t7z" Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.566151 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ms6jw" event={"ID":"cd227b10-f0cc-41ae-a9bf-dc88e3f3d0cd","Type":"ContainerStarted","Data":"dfa397b2fbe67411aefa3a1ba82d292e7c5afb1b0b6b03a66754b1259f0ae191"} Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.567103 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ms6jw" Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.569205 5112 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-v5t7z container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.569276 5112 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-v5t7z" podUID="02b6f45a-2d25-4712-b127-c1906f6fb154" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.591907 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:42 crc kubenswrapper[5112]: E1208 17:42:42.593692 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:43.093669122 +0000 UTC m=+140.103217823 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.594975 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-kc9qw" event={"ID":"7adf44ec-4226-407e-85c7-bd8a5d9bbf0d","Type":"ContainerStarted","Data":"87016fa0407784346228d3f5ab3a368f64dd297783aaf20ff2ff04d0cc6fc079"} Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.597421 5112 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-ms6jw container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused" start-of-body= Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.597470 5112 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ms6jw" podUID="cd227b10-f0cc-41ae-a9bf-dc88e3f3d0cd" 
containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused" Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.612498 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-xwx7k" event={"ID":"7203b67e-ad3c-4af4-905c-eb6c92ceeed3","Type":"ContainerStarted","Data":"542f413d26336f808d4cc808355574e7cde3b2c9c58f6dcb4efac8bf7e7befd1"} Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.612538 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-xwx7k" event={"ID":"7203b67e-ad3c-4af4-905c-eb6c92ceeed3","Type":"ContainerStarted","Data":"d15e1814d115eb52e848c6342811481fd7b9677e5d6c40c73153017cace69e94"} Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.612623 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ms6jw" podStartSLOduration=120.612603291 podStartE2EDuration="2m0.612603291s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:42.611564583 +0000 UTC m=+139.621113294" watchObservedRunningTime="2025-12-08 17:42:42.612603291 +0000 UTC m=+139.622151992" Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.613756 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-v5t7z" podStartSLOduration=120.613745712 podStartE2EDuration="2m0.613745712s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:42.569725118 +0000 UTC m=+139.579273819" 
watchObservedRunningTime="2025-12-08 17:42:42.613745712 +0000 UTC m=+139.623294413" Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.669147 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-6gxxt" Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.678286 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-kc9qw" podStartSLOduration=120.678269078 podStartE2EDuration="2m0.678269078s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:42.650116201 +0000 UTC m=+139.659664902" watchObservedRunningTime="2025-12-08 17:42:42.678269078 +0000 UTC m=+139.687817779" Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.697323 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:42 crc kubenswrapper[5112]: E1208 17:42:42.698505 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:43.198489292 +0000 UTC m=+140.208037993 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.733539 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-xwx7k" podStartSLOduration=120.733523685 podStartE2EDuration="2m0.733523685s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:42.682474391 +0000 UTC m=+139.692023092" watchObservedRunningTime="2025-12-08 17:42:42.733523685 +0000 UTC m=+139.743072386" Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.735728 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-nrr58" Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.799019 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:42 crc kubenswrapper[5112]: E1208 17:42:42.800458 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-08 17:42:43.300441305 +0000 UTC m=+140.309990006 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:42 crc kubenswrapper[5112]: I1208 17:42:42.903849 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:42 crc kubenswrapper[5112]: E1208 17:42:42.904271 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:43.404258048 +0000 UTC m=+140.413806739 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:43 crc kubenswrapper[5112]: I1208 17:42:43.004735 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:43 crc kubenswrapper[5112]: E1208 17:42:43.005066 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:43.50504828 +0000 UTC m=+140.514596981 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:43 crc kubenswrapper[5112]: I1208 17:42:43.077358 5112 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-p9hpg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:42:43 crc kubenswrapper[5112]: [-]has-synced failed: reason withheld Dec 08 17:42:43 crc kubenswrapper[5112]: [+]process-running ok Dec 08 17:42:43 crc kubenswrapper[5112]: healthz check failed Dec 08 17:42:43 crc kubenswrapper[5112]: I1208 17:42:43.077845 5112 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-p9hpg" podUID="65a17c30-dc44-43d4-8563-e5161462458c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:42:43 crc kubenswrapper[5112]: I1208 17:42:43.107269 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:43 crc kubenswrapper[5112]: E1208 17:42:43.107636 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-08 17:42:43.60761711 +0000 UTC m=+140.617165811 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:43 crc kubenswrapper[5112]: I1208 17:42:43.208828 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:43 crc kubenswrapper[5112]: E1208 17:42:43.209052 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:43.709017548 +0000 UTC m=+140.718566249 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:43 crc kubenswrapper[5112]: I1208 17:42:43.209432 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:43 crc kubenswrapper[5112]: E1208 17:42:43.209786 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:43.709772559 +0000 UTC m=+140.719321260 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:43 crc kubenswrapper[5112]: I1208 17:42:43.310988 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:43 crc kubenswrapper[5112]: E1208 17:42:43.311370 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:43.811351342 +0000 UTC m=+140.820900043 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:43 crc kubenswrapper[5112]: I1208 17:42:43.387941 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-6gxxt"] Dec 08 17:42:43 crc kubenswrapper[5112]: I1208 17:42:43.412774 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:43 crc kubenswrapper[5112]: E1208 17:42:43.413133 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:43.91311991 +0000 UTC m=+140.922668611 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:43 crc kubenswrapper[5112]: I1208 17:42:43.514226 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:43 crc kubenswrapper[5112]: E1208 17:42:43.514685 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:44.014664752 +0000 UTC m=+141.024213453 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:43 crc kubenswrapper[5112]: I1208 17:42:43.615547 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:43 crc kubenswrapper[5112]: E1208 17:42:43.615906 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:44.115893635 +0000 UTC m=+141.125442336 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:43 crc kubenswrapper[5112]: I1208 17:42:43.633072 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-ws944" event={"ID":"009d3924-f028-4f36-9c85-df76d4ec0a70","Type":"ContainerStarted","Data":"db3136f062b54042621d07c358e91b9cfccc835c2b5c7b2345836f2eb820bd01"} Dec 08 17:42:43 crc kubenswrapper[5112]: I1208 17:42:43.643021 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-zcf9f" event={"ID":"0e51fda2-d38e-45fc-aa7a-14fe47e53037","Type":"ContainerStarted","Data":"a0a5a39c54e0d9428c8a795a8639de48183e89486fa07fdaaec7d964265f588b"} Dec 08 17:42:43 crc kubenswrapper[5112]: I1208 17:42:43.650701 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2kppn" event={"ID":"a1258bd8-1206-44b0-8eba-2d2ed9e8dc42","Type":"ContainerStarted","Data":"9702406bcaf94c5cd26f93f7aca34d29d7f7bf8908c1019d43dfaa6cb4de055c"} Dec 08 17:42:43 crc kubenswrapper[5112]: I1208 17:42:43.651549 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2kppn" Dec 08 17:42:43 crc kubenswrapper[5112]: I1208 17:42:43.657014 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-ws944" podStartSLOduration=121.656999241 podStartE2EDuration="2m1.656999241s" 
podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:43.656258341 +0000 UTC m=+140.665807062" watchObservedRunningTime="2025-12-08 17:42:43.656999241 +0000 UTC m=+140.666547942" Dec 08 17:42:43 crc kubenswrapper[5112]: I1208 17:42:43.658762 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-fmwqx" event={"ID":"04cba9fb-d24a-4d7a-b58c-edb03c2ee2ee","Type":"ContainerStarted","Data":"234aaba1cbdc86a5b017f321afa69b846dad30ab35b236de690eeef2cf976bb7"} Dec 08 17:42:43 crc kubenswrapper[5112]: I1208 17:42:43.669300 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z5hzx" event={"ID":"d4012bb8-5470-4545-9344-50a74df66572","Type":"ContainerStarted","Data":"7a1c73e60736d783f51082ce8e5f61f8136dd3f6a9913b388ba9301abb84628f"} Dec 08 17:42:43 crc kubenswrapper[5112]: I1208 17:42:43.669662 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-z5hzx" Dec 08 17:42:43 crc kubenswrapper[5112]: I1208 17:42:43.672407 5112 generic.go:358] "Generic (PLEG): container finished" podID="15777a2f-256b-4501-9856-749819a161a9" containerID="4a6deec5f8c35984055f2ab49b4a9d0045e47411d5f176b48bd8c482423e4b2d" exitCode=0 Dec 08 17:42:43 crc kubenswrapper[5112]: I1208 17:42:43.672600 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-8gf2b" event={"ID":"15777a2f-256b-4501-9856-749819a161a9","Type":"ContainerDied","Data":"4a6deec5f8c35984055f2ab49b4a9d0045e47411d5f176b48bd8c482423e4b2d"} Dec 08 17:42:43 crc kubenswrapper[5112]: I1208 17:42:43.687029 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-hc5xj" 
event={"ID":"9b0063af-3ff2-4e04-81f7-56971d792d20","Type":"ContainerStarted","Data":"d4fcd673eeb77a77b1f4d1a903ee6f430b2e8e23d247c5fb82065b00b4af7f73"} Dec 08 17:42:43 crc kubenswrapper[5112]: I1208 17:42:43.687096 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-hc5xj" Dec 08 17:42:43 crc kubenswrapper[5112]: I1208 17:42:43.689576 5112 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-ms6jw container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused" start-of-body= Dec 08 17:42:43 crc kubenswrapper[5112]: I1208 17:42:43.689630 5112 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ms6jw" podUID="cd227b10-f0cc-41ae-a9bf-dc88e3f3d0cd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused" Dec 08 17:42:43 crc kubenswrapper[5112]: I1208 17:42:43.700436 5112 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-v5t7z container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Dec 08 17:42:43 crc kubenswrapper[5112]: I1208 17:42:43.700520 5112 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-v5t7z" podUID="02b6f45a-2d25-4712-b127-c1906f6fb154" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" Dec 08 17:42:43 crc kubenswrapper[5112]: I1208 17:42:43.703903 5112 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2kppn" podStartSLOduration=121.703885453 podStartE2EDuration="2m1.703885453s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:43.700680477 +0000 UTC m=+140.710229178" watchObservedRunningTime="2025-12-08 17:42:43.703885453 +0000 UTC m=+140.713434154" Dec 08 17:42:43 crc kubenswrapper[5112]: I1208 17:42:43.707179 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-j76sm" Dec 08 17:42:43 crc kubenswrapper[5112]: I1208 17:42:43.717696 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:43 crc kubenswrapper[5112]: E1208 17:42:43.719166 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:44.219148333 +0000 UTC m=+141.228697034 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:43 crc kubenswrapper[5112]: I1208 17:42:43.729449 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-z5hzx" podStartSLOduration=9.72942725 podStartE2EDuration="9.72942725s" podCreationTimestamp="2025-12-08 17:42:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:43.727845737 +0000 UTC m=+140.737394458" watchObservedRunningTime="2025-12-08 17:42:43.72942725 +0000 UTC m=+140.738975951" Dec 08 17:42:43 crc kubenswrapper[5112]: I1208 17:42:43.759907 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-hc5xj" podStartSLOduration=121.75989315 podStartE2EDuration="2m1.75989315s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:43.758705928 +0000 UTC m=+140.768254629" watchObservedRunningTime="2025-12-08 17:42:43.75989315 +0000 UTC m=+140.769441851" Dec 08 17:42:43 crc kubenswrapper[5112]: I1208 17:42:43.812791 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-fmwqx" podStartSLOduration=121.812771422 podStartE2EDuration="2m1.812771422s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:43.782757945 +0000 UTC m=+140.792306646" watchObservedRunningTime="2025-12-08 17:42:43.812771422 +0000 UTC m=+140.822320123" Dec 08 17:42:43 crc kubenswrapper[5112]: I1208 17:42:43.819343 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:43 crc kubenswrapper[5112]: E1208 17:42:43.826968 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:44.326947484 +0000 UTC m=+141.336496205 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:43 crc kubenswrapper[5112]: I1208 17:42:43.866341 5112 ???:1] "http: TLS handshake error from 192.168.126.11:53324: no serving certificate available for the kubelet" Dec 08 17:42:43 crc kubenswrapper[5112]: I1208 17:42:43.921654 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:43 crc kubenswrapper[5112]: E1208 17:42:43.921940 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:44.421922979 +0000 UTC m=+141.431471680 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:43 crc kubenswrapper[5112]: I1208 17:42:43.952410 5112 ???:1] "http: TLS handshake error from 192.168.126.11:53336: no serving certificate available for the kubelet" Dec 08 17:42:44 crc kubenswrapper[5112]: I1208 17:42:44.015556 5112 ???:1] "http: TLS handshake error from 192.168.126.11:53350: no serving certificate available for the kubelet" Dec 08 17:42:44 crc kubenswrapper[5112]: I1208 17:42:44.023030 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:44 crc kubenswrapper[5112]: E1208 17:42:44.023406 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:44.523391269 +0000 UTC m=+141.532939970 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:44 crc kubenswrapper[5112]: I1208 17:42:44.072037 5112 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-p9hpg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:42:44 crc kubenswrapper[5112]: [-]has-synced failed: reason withheld Dec 08 17:42:44 crc kubenswrapper[5112]: [+]process-running ok Dec 08 17:42:44 crc kubenswrapper[5112]: healthz check failed Dec 08 17:42:44 crc kubenswrapper[5112]: I1208 17:42:44.072169 5112 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-p9hpg" podUID="65a17c30-dc44-43d4-8563-e5161462458c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:42:44 crc kubenswrapper[5112]: I1208 17:42:44.104217 5112 ???:1] "http: TLS handshake error from 192.168.126.11:53366: no serving certificate available for the kubelet" Dec 08 17:42:44 crc kubenswrapper[5112]: I1208 17:42:44.124136 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:44 crc kubenswrapper[5112]: E1208 17:42:44.124328 5112 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:44.624305145 +0000 UTC m=+141.633853846 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:44 crc kubenswrapper[5112]: I1208 17:42:44.124386 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:44 crc kubenswrapper[5112]: E1208 17:42:44.124701 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:44.624689395 +0000 UTC m=+141.634238096 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:44 crc kubenswrapper[5112]: I1208 17:42:44.218123 5112 ???:1] "http: TLS handshake error from 192.168.126.11:53368: no serving certificate available for the kubelet" Dec 08 17:42:44 crc kubenswrapper[5112]: I1208 17:42:44.225807 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:44 crc kubenswrapper[5112]: E1208 17:42:44.226008 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:44.725991841 +0000 UTC m=+141.735540542 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:44 crc kubenswrapper[5112]: I1208 17:42:44.226100 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:44 crc kubenswrapper[5112]: E1208 17:42:44.226504 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:44.726485154 +0000 UTC m=+141.736033915 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:44 crc kubenswrapper[5112]: I1208 17:42:44.321127 5112 ???:1] "http: TLS handshake error from 192.168.126.11:53382: no serving certificate available for the kubelet" Dec 08 17:42:44 crc kubenswrapper[5112]: I1208 17:42:44.326749 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:44 crc kubenswrapper[5112]: E1208 17:42:44.327326 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:44.827305667 +0000 UTC m=+141.836854368 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:44 crc kubenswrapper[5112]: I1208 17:42:44.397997 5112 ???:1] "http: TLS handshake error from 192.168.126.11:53398: no serving certificate available for the kubelet" Dec 08 17:42:44 crc kubenswrapper[5112]: I1208 17:42:44.429174 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:44 crc kubenswrapper[5112]: E1208 17:42:44.429538 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:44.929522877 +0000 UTC m=+141.939071578 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:44 crc kubenswrapper[5112]: I1208 17:42:44.514916 5112 ???:1] "http: TLS handshake error from 192.168.126.11:53402: no serving certificate available for the kubelet" Dec 08 17:42:44 crc kubenswrapper[5112]: I1208 17:42:44.530734 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:44 crc kubenswrapper[5112]: E1208 17:42:44.531192 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:45.031171172 +0000 UTC m=+142.040719873 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:44 crc kubenswrapper[5112]: I1208 17:42:44.632353 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:44 crc kubenswrapper[5112]: E1208 17:42:44.632829 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:45.132807666 +0000 UTC m=+142.142356367 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:44 crc kubenswrapper[5112]: I1208 17:42:44.708744 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-p4h9p" event={"ID":"ace9dd66-3bc5-4b64-afe3-4f05af28644c","Type":"ContainerStarted","Data":"acddc55e351979f98b80bd3d373a40c74bb7e2bd4f7007c6e12f725ed6509dd4"} Dec 08 17:42:44 crc kubenswrapper[5112]: I1208 17:42:44.709636 5112 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-6gxxt" podUID="234e7e70-7bb6-457f-a170-f1349602c58a" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://6f5e811595ca12980ae3c34f2a35e64646b6c89adddcc9e33a8090bb47b71053" gracePeriod=30 Dec 08 17:42:44 crc kubenswrapper[5112]: I1208 17:42:44.713101 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zngdv"] Dec 08 17:42:44 crc kubenswrapper[5112]: I1208 17:42:44.731034 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zngdv" Dec 08 17:42:44 crc kubenswrapper[5112]: I1208 17:42:44.733690 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 08 17:42:44 crc kubenswrapper[5112]: I1208 17:42:44.733903 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zngdv"] Dec 08 17:42:44 crc kubenswrapper[5112]: I1208 17:42:44.734446 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:44 crc kubenswrapper[5112]: E1208 17:42:44.734797 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:45.234771469 +0000 UTC m=+142.244320170 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:44 crc kubenswrapper[5112]: I1208 17:42:44.836848 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:44 crc kubenswrapper[5112]: I1208 17:42:44.836918 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea80841c-bb81-4bd4-a6b4-dde2e04b9351-catalog-content\") pod \"certified-operators-zngdv\" (UID: \"ea80841c-bb81-4bd4-a6b4-dde2e04b9351\") " pod="openshift-marketplace/certified-operators-zngdv" Dec 08 17:42:44 crc kubenswrapper[5112]: I1208 17:42:44.837036 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea80841c-bb81-4bd4-a6b4-dde2e04b9351-utilities\") pod \"certified-operators-zngdv\" (UID: \"ea80841c-bb81-4bd4-a6b4-dde2e04b9351\") " pod="openshift-marketplace/certified-operators-zngdv" Dec 08 17:42:44 crc kubenswrapper[5112]: I1208 17:42:44.837144 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75srw\" (UniqueName: 
\"kubernetes.io/projected/ea80841c-bb81-4bd4-a6b4-dde2e04b9351-kube-api-access-75srw\") pod \"certified-operators-zngdv\" (UID: \"ea80841c-bb81-4bd4-a6b4-dde2e04b9351\") " pod="openshift-marketplace/certified-operators-zngdv" Dec 08 17:42:44 crc kubenswrapper[5112]: E1208 17:42:44.838590 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:45.338571202 +0000 UTC m=+142.348119903 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:44 crc kubenswrapper[5112]: I1208 17:42:44.924666 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-f4flg"] Dec 08 17:42:44 crc kubenswrapper[5112]: I1208 17:42:44.937685 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f4flg"] Dec 08 17:42:44 crc kubenswrapper[5112]: I1208 17:42:44.938141 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f4flg" Dec 08 17:42:44 crc kubenswrapper[5112]: I1208 17:42:44.944599 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 08 17:42:44 crc kubenswrapper[5112]: I1208 17:42:44.946289 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:44 crc kubenswrapper[5112]: E1208 17:42:44.946341 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:45.446325821 +0000 UTC m=+142.455874512 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:44 crc kubenswrapper[5112]: I1208 17:42:44.949640 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:44 crc kubenswrapper[5112]: I1208 17:42:44.949831 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea80841c-bb81-4bd4-a6b4-dde2e04b9351-catalog-content\") pod \"certified-operators-zngdv\" (UID: \"ea80841c-bb81-4bd4-a6b4-dde2e04b9351\") " pod="openshift-marketplace/certified-operators-zngdv" Dec 08 17:42:44 crc kubenswrapper[5112]: I1208 17:42:44.950021 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea80841c-bb81-4bd4-a6b4-dde2e04b9351-utilities\") pod \"certified-operators-zngdv\" (UID: \"ea80841c-bb81-4bd4-a6b4-dde2e04b9351\") " pod="openshift-marketplace/certified-operators-zngdv" Dec 08 17:42:44 crc kubenswrapper[5112]: I1208 17:42:44.950193 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-75srw\" (UniqueName: \"kubernetes.io/projected/ea80841c-bb81-4bd4-a6b4-dde2e04b9351-kube-api-access-75srw\") pod \"certified-operators-zngdv\" (UID: 
\"ea80841c-bb81-4bd4-a6b4-dde2e04b9351\") " pod="openshift-marketplace/certified-operators-zngdv" Dec 08 17:42:44 crc kubenswrapper[5112]: E1208 17:42:44.950901 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:45.450884334 +0000 UTC m=+142.460433035 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:44 crc kubenswrapper[5112]: I1208 17:42:44.951585 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea80841c-bb81-4bd4-a6b4-dde2e04b9351-catalog-content\") pod \"certified-operators-zngdv\" (UID: \"ea80841c-bb81-4bd4-a6b4-dde2e04b9351\") " pod="openshift-marketplace/certified-operators-zngdv" Dec 08 17:42:44 crc kubenswrapper[5112]: I1208 17:42:44.951999 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea80841c-bb81-4bd4-a6b4-dde2e04b9351-utilities\") pod \"certified-operators-zngdv\" (UID: \"ea80841c-bb81-4bd4-a6b4-dde2e04b9351\") " pod="openshift-marketplace/certified-operators-zngdv" Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.000197 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-75srw\" (UniqueName: \"kubernetes.io/projected/ea80841c-bb81-4bd4-a6b4-dde2e04b9351-kube-api-access-75srw\") pod \"certified-operators-zngdv\" (UID: 
\"ea80841c-bb81-4bd4-a6b4-dde2e04b9351\") " pod="openshift-marketplace/certified-operators-zngdv" Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.054756 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.055694 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr6xv\" (UniqueName: \"kubernetes.io/projected/a8b663e6-709e-4802-8101-44c949911229-kube-api-access-rr6xv\") pod \"community-operators-f4flg\" (UID: \"a8b663e6-709e-4802-8101-44c949911229\") " pod="openshift-marketplace/community-operators-f4flg" Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.055782 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8b663e6-709e-4802-8101-44c949911229-utilities\") pod \"community-operators-f4flg\" (UID: \"a8b663e6-709e-4802-8101-44c949911229\") " pod="openshift-marketplace/community-operators-f4flg" Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.055799 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8b663e6-709e-4802-8101-44c949911229-catalog-content\") pod \"community-operators-f4flg\" (UID: \"a8b663e6-709e-4802-8101-44c949911229\") " pod="openshift-marketplace/community-operators-f4flg" Dec 08 17:42:45 crc kubenswrapper[5112]: E1208 17:42:45.056421 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:45.556394663 +0000 UTC m=+142.565943364 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.060683 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zngdv" Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.085306 5112 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-p9hpg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:42:45 crc kubenswrapper[5112]: [-]has-synced failed: reason withheld Dec 08 17:42:45 crc kubenswrapper[5112]: [+]process-running ok Dec 08 17:42:45 crc kubenswrapper[5112]: healthz check failed Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.085394 5112 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-p9hpg" podUID="65a17c30-dc44-43d4-8563-e5161462458c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.130858 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rvq22"] Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.151288 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rvq22" Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.152633 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-8gf2b" Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.156855 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rr6xv\" (UniqueName: \"kubernetes.io/projected/a8b663e6-709e-4802-8101-44c949911229-kube-api-access-rr6xv\") pod \"community-operators-f4flg\" (UID: \"a8b663e6-709e-4802-8101-44c949911229\") " pod="openshift-marketplace/community-operators-f4flg" Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.156932 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8b663e6-709e-4802-8101-44c949911229-utilities\") pod \"community-operators-f4flg\" (UID: \"a8b663e6-709e-4802-8101-44c949911229\") " pod="openshift-marketplace/community-operators-f4flg" Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.156954 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8b663e6-709e-4802-8101-44c949911229-catalog-content\") pod \"community-operators-f4flg\" (UID: \"a8b663e6-709e-4802-8101-44c949911229\") " pod="openshift-marketplace/community-operators-f4flg" Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.157006 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:45 crc kubenswrapper[5112]: E1208 
17:42:45.157481 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:45.657465062 +0000 UTC m=+142.667013763 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.157595 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8b663e6-709e-4802-8101-44c949911229-catalog-content\") pod \"community-operators-f4flg\" (UID: \"a8b663e6-709e-4802-8101-44c949911229\") " pod="openshift-marketplace/community-operators-f4flg" Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.157865 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8b663e6-709e-4802-8101-44c949911229-utilities\") pod \"community-operators-f4flg\" (UID: \"a8b663e6-709e-4802-8101-44c949911229\") " pod="openshift-marketplace/community-operators-f4flg" Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.164669 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rvq22"] Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.193260 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr6xv\" (UniqueName: \"kubernetes.io/projected/a8b663e6-709e-4802-8101-44c949911229-kube-api-access-rr6xv\") pod 
\"community-operators-f4flg\" (UID: \"a8b663e6-709e-4802-8101-44c949911229\") " pod="openshift-marketplace/community-operators-f4flg" Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.218747 5112 ???:1] "http: TLS handshake error from 192.168.126.11:53406: no serving certificate available for the kubelet" Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.270937 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54k67\" (UniqueName: \"kubernetes.io/projected/15777a2f-256b-4501-9856-749819a161a9-kube-api-access-54k67\") pod \"15777a2f-256b-4501-9856-749819a161a9\" (UID: \"15777a2f-256b-4501-9856-749819a161a9\") " Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.271170 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.271225 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15777a2f-256b-4501-9856-749819a161a9-config-volume\") pod \"15777a2f-256b-4501-9856-749819a161a9\" (UID: \"15777a2f-256b-4501-9856-749819a161a9\") " Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.271278 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15777a2f-256b-4501-9856-749819a161a9-secret-volume\") pod \"15777a2f-256b-4501-9856-749819a161a9\" (UID: \"15777a2f-256b-4501-9856-749819a161a9\") " Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.271449 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e13583b7-7ad1-4129-8b1b-0ee32c5603df-utilities\") pod \"certified-operators-rvq22\" (UID: \"e13583b7-7ad1-4129-8b1b-0ee32c5603df\") " pod="openshift-marketplace/certified-operators-rvq22" Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.271682 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e13583b7-7ad1-4129-8b1b-0ee32c5603df-catalog-content\") pod \"certified-operators-rvq22\" (UID: \"e13583b7-7ad1-4129-8b1b-0ee32c5603df\") " pod="openshift-marketplace/certified-operators-rvq22" Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.271749 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sgr4\" (UniqueName: \"kubernetes.io/projected/e13583b7-7ad1-4129-8b1b-0ee32c5603df-kube-api-access-6sgr4\") pod \"certified-operators-rvq22\" (UID: \"e13583b7-7ad1-4129-8b1b-0ee32c5603df\") " pod="openshift-marketplace/certified-operators-rvq22" Dec 08 17:42:45 crc kubenswrapper[5112]: E1208 17:42:45.273283 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:45.773253247 +0000 UTC m=+142.782801948 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.273797 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15777a2f-256b-4501-9856-749819a161a9-config-volume" (OuterVolumeSpecName: "config-volume") pod "15777a2f-256b-4501-9856-749819a161a9" (UID: "15777a2f-256b-4501-9856-749819a161a9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.287397 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f4flg" Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.290453 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15777a2f-256b-4501-9856-749819a161a9-kube-api-access-54k67" (OuterVolumeSpecName: "kube-api-access-54k67") pod "15777a2f-256b-4501-9856-749819a161a9" (UID: "15777a2f-256b-4501-9856-749819a161a9"). InnerVolumeSpecName "kube-api-access-54k67". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.299812 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15777a2f-256b-4501-9856-749819a161a9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "15777a2f-256b-4501-9856-749819a161a9" (UID: "15777a2f-256b-4501-9856-749819a161a9"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.340260 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-n6jr7"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.341378 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-n6jr7"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.373765 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6sgr4\" (UniqueName: \"kubernetes.io/projected/e13583b7-7ad1-4129-8b1b-0ee32c5603df-kube-api-access-6sgr4\") pod \"certified-operators-rvq22\" (UID: \"e13583b7-7ad1-4129-8b1b-0ee32c5603df\") " pod="openshift-marketplace/certified-operators-rvq22"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.374349 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.374475 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.374621 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e13583b7-7ad1-4129-8b1b-0ee32c5603df-utilities\") pod \"certified-operators-rvq22\" (UID: \"e13583b7-7ad1-4129-8b1b-0ee32c5603df\") " pod="openshift-marketplace/certified-operators-rvq22"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.374727 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.374872 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e13583b7-7ad1-4129-8b1b-0ee32c5603df-catalog-content\") pod \"certified-operators-rvq22\" (UID: \"e13583b7-7ad1-4129-8b1b-0ee32c5603df\") " pod="openshift-marketplace/certified-operators-rvq22"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.374985 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.375121 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.375275 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-54k67\" (UniqueName: \"kubernetes.io/projected/15777a2f-256b-4501-9856-749819a161a9-kube-api-access-54k67\") on node \"crc\" DevicePath \"\""
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.375359 5112 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15777a2f-256b-4501-9856-749819a161a9-config-volume\") on node \"crc\" DevicePath \"\""
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.375435 5112 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15777a2f-256b-4501-9856-749819a161a9-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.376335 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qqqtw"]
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.376856 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="15777a2f-256b-4501-9856-749819a161a9" containerName="collect-profiles"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.376867 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="15777a2f-256b-4501-9856-749819a161a9" containerName="collect-profiles"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.376939 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="15777a2f-256b-4501-9856-749819a161a9" containerName="collect-profiles"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.384904 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.385214 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e13583b7-7ad1-4129-8b1b-0ee32c5603df-utilities\") pod \"certified-operators-rvq22\" (UID: \"e13583b7-7ad1-4129-8b1b-0ee32c5603df\") " pod="openshift-marketplace/certified-operators-rvq22"
Dec 08 17:42:45 crc kubenswrapper[5112]: E1208 17:42:45.386159 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:45.886141755 +0000 UTC m=+142.895690516 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.386729 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e13583b7-7ad1-4129-8b1b-0ee32c5603df-catalog-content\") pod \"certified-operators-rvq22\" (UID: \"e13583b7-7ad1-4129-8b1b-0ee32c5603df\") " pod="openshift-marketplace/certified-operators-rvq22"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.390390 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.391335 5112 patch_prober.go:28] interesting pod/console-64d44f6ddf-m2rqt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body=
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.396509 5112 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-m2rqt" podUID="e175a7a0-9b51-4b5d-b85a-dd604a3db837" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.393805 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qqqtw"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.393696 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-m2rqt"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.397403 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-n6jr7"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.397506 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-m2rqt"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.401644 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.407111 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.408302 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qqqtw"]
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.429534 5112 patch_prober.go:28] interesting pod/downloads-747b44746d-gv282 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.429594 5112 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-gv282" podUID="3b27b80a-df1a-4a29-82d6-384db5b6612e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.437297 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sgr4\" (UniqueName: \"kubernetes.io/projected/e13583b7-7ad1-4129-8b1b-0ee32c5603df-kube-api-access-6sgr4\") pod \"certified-operators-rvq22\" (UID: \"e13583b7-7ad1-4129-8b1b-0ee32c5603df\") " pod="openshift-marketplace/certified-operators-rvq22"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.467689 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rvq22"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.497799 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.498013 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/027b046b-01a1-48d8-a6b7-d03fd6509f1f-utilities\") pod \"community-operators-qqqtw\" (UID: \"027b046b-01a1-48d8-a6b7-d03fd6509f1f\") " pod="openshift-marketplace/community-operators-qqqtw"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.498034 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpfw5\" (UniqueName: \"kubernetes.io/projected/027b046b-01a1-48d8-a6b7-d03fd6509f1f-kube-api-access-dpfw5\") pod \"community-operators-qqqtw\" (UID: \"027b046b-01a1-48d8-a6b7-d03fd6509f1f\") " pod="openshift-marketplace/community-operators-qqqtw"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.498238 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/027b046b-01a1-48d8-a6b7-d03fd6509f1f-catalog-content\") pod \"community-operators-qqqtw\" (UID: \"027b046b-01a1-48d8-a6b7-d03fd6509f1f\") " pod="openshift-marketplace/community-operators-qqqtw"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.498292 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3c4fb553-8514-4194-847c-96d40f8b41e3-metrics-certs\") pod \"network-metrics-daemon-7jq8h\" (UID: \"3c4fb553-8514-4194-847c-96d40f8b41e3\") " pod="openshift-multus/network-metrics-daemon-7jq8h"
Dec 08 17:42:45 crc kubenswrapper[5112]: E1208 17:42:45.502605 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:46.002583738 +0000 UTC m=+143.012132439 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.513975 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3c4fb553-8514-4194-847c-96d40f8b41e3-metrics-certs\") pod \"network-metrics-daemon-7jq8h\" (UID: \"3c4fb553-8514-4194-847c-96d40f8b41e3\") " pod="openshift-multus/network-metrics-daemon-7jq8h"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.544283 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jq8h"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.545141 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.556269 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.568361 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.601743 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.601830 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/027b046b-01a1-48d8-a6b7-d03fd6509f1f-catalog-content\") pod \"community-operators-qqqtw\" (UID: \"027b046b-01a1-48d8-a6b7-d03fd6509f1f\") " pod="openshift-marketplace/community-operators-qqqtw"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.601871 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/027b046b-01a1-48d8-a6b7-d03fd6509f1f-utilities\") pod \"community-operators-qqqtw\" (UID: \"027b046b-01a1-48d8-a6b7-d03fd6509f1f\") " pod="openshift-marketplace/community-operators-qqqtw"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.601888 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dpfw5\" (UniqueName: \"kubernetes.io/projected/027b046b-01a1-48d8-a6b7-d03fd6509f1f-kube-api-access-dpfw5\") pod \"community-operators-qqqtw\" (UID: \"027b046b-01a1-48d8-a6b7-d03fd6509f1f\") " pod="openshift-marketplace/community-operators-qqqtw"
Dec 08 17:42:45 crc kubenswrapper[5112]: E1208 17:42:45.602168 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:46.102150096 +0000 UTC m=+143.111698797 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.602462 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/027b046b-01a1-48d8-a6b7-d03fd6509f1f-catalog-content\") pod \"community-operators-qqqtw\" (UID: \"027b046b-01a1-48d8-a6b7-d03fd6509f1f\") " pod="openshift-marketplace/community-operators-qqqtw"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.602506 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/027b046b-01a1-48d8-a6b7-d03fd6509f1f-utilities\") pod \"community-operators-qqqtw\" (UID: \"027b046b-01a1-48d8-a6b7-d03fd6509f1f\") " pod="openshift-marketplace/community-operators-qqqtw"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.636464 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpfw5\" (UniqueName: \"kubernetes.io/projected/027b046b-01a1-48d8-a6b7-d03fd6509f1f-kube-api-access-dpfw5\") pod \"community-operators-qqqtw\" (UID: \"027b046b-01a1-48d8-a6b7-d03fd6509f1f\") " pod="openshift-marketplace/community-operators-qqqtw"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.707277 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ms6jw"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.708204 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:45 crc kubenswrapper[5112]: E1208 17:42:45.708692 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:46.208659702 +0000 UTC m=+143.218208403 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.743885 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qqqtw"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.767871 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-8gf2b"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.768397 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-8gf2b" event={"ID":"15777a2f-256b-4501-9856-749819a161a9","Type":"ContainerDied","Data":"92bfd7ed1663bfc97c038c97d29362b6475f5df29b7e1c6dd434b21e56c0b514"}
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.768505 5112 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92bfd7ed1663bfc97c038c97d29362b6475f5df29b7e1c6dd434b21e56c0b514"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.778482 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-n6jr7"
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.812976 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8"
Dec 08 17:42:45 crc kubenswrapper[5112]: E1208 17:42:45.813660 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:46.313639687 +0000 UTC m=+143.323188388 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.914065 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:45 crc kubenswrapper[5112]: E1208 17:42:45.914690 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:46.414662105 +0000 UTC m=+143.424210806 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:45 crc kubenswrapper[5112]: I1208 17:42:45.955636 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zngdv"]
Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.017754 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8"
Dec 08 17:42:46 crc kubenswrapper[5112]: E1208 17:42:46.018218 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:46.518206 +0000 UTC m=+143.527754701 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.024598 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rvq22"]
Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.091328 5112 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-p9hpg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 08 17:42:46 crc kubenswrapper[5112]: [-]has-synced failed: reason withheld
Dec 08 17:42:46 crc kubenswrapper[5112]: [+]process-running ok
Dec 08 17:42:46 crc kubenswrapper[5112]: healthz check failed
Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.091757 5112 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-p9hpg" podUID="65a17c30-dc44-43d4-8563-e5161462458c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.119637 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:46 crc kubenswrapper[5112]: E1208 17:42:46.124333 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:46.624301765 +0000 UTC m=+143.633850466 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.165088 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f4flg"]
Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.228848 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8"
Dec 08 17:42:46 crc kubenswrapper[5112]: E1208 17:42:46.229278 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:46.729261039 +0000 UTC m=+143.738809740 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.334695 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:46 crc kubenswrapper[5112]: E1208 17:42:46.335062 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:46.835037545 +0000 UTC m=+143.844586246 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.360843 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"]
Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.380970 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.386229 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"]
Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.394582 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\""
Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.394819 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\""
Dec 08 17:42:46 crc kubenswrapper[5112]: W1208 17:42:46.400345 5112 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a9ae5f6_97bd_46ac_bafa_ca1b4452a141.slice/crio-b0f2a17281b14e999df3c0e871082041df2a39e8132dea2ef056d9ccf61fa50b WatchSource:0}: Error finding container b0f2a17281b14e999df3c0e871082041df2a39e8132dea2ef056d9ccf61fa50b: Status 404 returned error can't find the container with id b0f2a17281b14e999df3c0e871082041df2a39e8132dea2ef056d9ccf61fa50b
Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.438281 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8"
Dec 08 17:42:46 crc kubenswrapper[5112]: E1208 17:42:46.438897 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:46.938871409 +0000 UTC m=+143.948420110 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.491570 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qqqtw"]
Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.529051 5112 ???:1] "http: TLS handshake error from 192.168.126.11:53418: no serving certificate available for the kubelet"
Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.541648 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:46 crc kubenswrapper[5112]: E1208 17:42:46.541748 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:47.041724766 +0000 UTC m=+144.051273467 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.542217 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8"
Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.542290 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9bdcf563-b973-48ef-8c03-dbc3dc5eed6b-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"9bdcf563-b973-48ef-8c03-dbc3dc5eed6b\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.542341 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9bdcf563-b973-48ef-8c03-dbc3dc5eed6b-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"9bdcf563-b973-48ef-8c03-dbc3dc5eed6b\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 08 17:42:46 crc kubenswrapper[5112]: E1208 17:42:46.542537 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:47.042524038 +0000 UTC m=+144.052072739 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.642879 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.642990 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9bdcf563-b973-48ef-8c03-dbc3dc5eed6b-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"9bdcf563-b973-48ef-8c03-dbc3dc5eed6b\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.643019 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9bdcf563-b973-48ef-8c03-dbc3dc5eed6b-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"9bdcf563-b973-48ef-8c03-dbc3dc5eed6b\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.643177 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9bdcf563-b973-48ef-8c03-dbc3dc5eed6b-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"9bdcf563-b973-48ef-8c03-dbc3dc5eed6b\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 08 17:42:46 crc kubenswrapper[5112]: E1208 17:42:46.643239 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:47.143225297 +0000 UTC m=+144.152773998 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.676948 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9bdcf563-b973-48ef-8c03-dbc3dc5eed6b-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"9bdcf563-b973-48ef-8c03-dbc3dc5eed6b\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.728388 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-phq66"]
Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.732010 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-phq66" Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.736431 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.746262 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:46 crc kubenswrapper[5112]: E1208 17:42:46.746638 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:47.246623719 +0000 UTC m=+144.256172420 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.748134 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-phq66"] Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.777114 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"ef838dfe86fad8f0abda4295f46283e22f0950b7af8ddf182e1404ea9981c302"} Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.777155 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"dec08aaeec7e223b61314fa153c12788b6dbbaa2709d0c4223f3c31cb59253b8"} Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.777393 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.781925 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-7jq8h"] Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.783278 5112 generic.go:358] "Generic (PLEG): container finished" podID="ea80841c-bb81-4bd4-a6b4-dde2e04b9351" containerID="9910f689a2a781ea8426914accd5ea5e140b675aaca4bbd3d68727b2bbfa7512" exitCode=0 Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.783372 5112 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zngdv" event={"ID":"ea80841c-bb81-4bd4-a6b4-dde2e04b9351","Type":"ContainerDied","Data":"9910f689a2a781ea8426914accd5ea5e140b675aaca4bbd3d68727b2bbfa7512"} Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.783401 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zngdv" event={"ID":"ea80841c-bb81-4bd4-a6b4-dde2e04b9351","Type":"ContainerStarted","Data":"5bfc8fededfc2f43caf2fa7eee69c1b39cf98023a765faa356f7ff1490bd52ff"} Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.792406 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qqqtw" event={"ID":"027b046b-01a1-48d8-a6b7-d03fd6509f1f","Type":"ContainerStarted","Data":"bcb9a0d4c4d5583f28725e1ef0cf51fbdeb0097cc05eaff0f229b0c62615d120"} Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.809201 5112 generic.go:358] "Generic (PLEG): container finished" podID="e13583b7-7ad1-4129-8b1b-0ee32c5603df" containerID="7d9c3b55d77046972d1186cc63cf811faadd28d8b6c5a511b7567603b513b6c9" exitCode=0 Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.809325 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvq22" event={"ID":"e13583b7-7ad1-4129-8b1b-0ee32c5603df","Type":"ContainerDied","Data":"7d9c3b55d77046972d1186cc63cf811faadd28d8b6c5a511b7567603b513b6c9"} Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.809357 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvq22" event={"ID":"e13583b7-7ad1-4129-8b1b-0ee32c5603df","Type":"ContainerStarted","Data":"2fec56c15e1c85c85c8c0638b3d8bae70d14ac9c1842c805d9681197bc407837"} Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.822249 5112 generic.go:358] "Generic (PLEG): container finished" podID="a8b663e6-709e-4802-8101-44c949911229" 
containerID="7c9900c3b6f20b146013624c84cb3858a4baa5a0c4899db3405e1cf9c530b88b" exitCode=0 Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.822802 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4flg" event={"ID":"a8b663e6-709e-4802-8101-44c949911229","Type":"ContainerDied","Data":"7c9900c3b6f20b146013624c84cb3858a4baa5a0c4899db3405e1cf9c530b88b"} Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.822862 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4flg" event={"ID":"a8b663e6-709e-4802-8101-44c949911229","Type":"ContainerStarted","Data":"417f318c3d99736782f5125ebcd793b4c11018d95debeb99e3d59e0368d966db"} Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.827229 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"b0f2a17281b14e999df3c0e871082041df2a39e8132dea2ef056d9ccf61fa50b"} Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.847795 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.848061 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4a649bd-963b-42eb-8283-2f6d98b54ef8-catalog-content\") pod \"redhat-marketplace-phq66\" (UID: \"a4a649bd-963b-42eb-8283-2f6d98b54ef8\") " pod="openshift-marketplace/redhat-marketplace-phq66" Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.848140 5112 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4a649bd-963b-42eb-8283-2f6d98b54ef8-utilities\") pod \"redhat-marketplace-phq66\" (UID: \"a4a649bd-963b-42eb-8283-2f6d98b54ef8\") " pod="openshift-marketplace/redhat-marketplace-phq66" Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.848166 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99gqn\" (UniqueName: \"kubernetes.io/projected/a4a649bd-963b-42eb-8283-2f6d98b54ef8-kube-api-access-99gqn\") pod \"redhat-marketplace-phq66\" (UID: \"a4a649bd-963b-42eb-8283-2f6d98b54ef8\") " pod="openshift-marketplace/redhat-marketplace-phq66" Dec 08 17:42:46 crc kubenswrapper[5112]: E1208 17:42:46.848290 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:47.348274614 +0000 UTC m=+144.357823315 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.872364 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.952331 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-99gqn\" (UniqueName: \"kubernetes.io/projected/a4a649bd-963b-42eb-8283-2f6d98b54ef8-kube-api-access-99gqn\") pod \"redhat-marketplace-phq66\" (UID: \"a4a649bd-963b-42eb-8283-2f6d98b54ef8\") " pod="openshift-marketplace/redhat-marketplace-phq66" Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.952716 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.952922 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4a649bd-963b-42eb-8283-2f6d98b54ef8-catalog-content\") pod \"redhat-marketplace-phq66\" (UID: \"a4a649bd-963b-42eb-8283-2f6d98b54ef8\") " pod="openshift-marketplace/redhat-marketplace-phq66" Dec 08 17:42:46 crc kubenswrapper[5112]: E1208 17:42:46.952953 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:47.45294076 +0000 UTC m=+144.462489461 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.952987 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4a649bd-963b-42eb-8283-2f6d98b54ef8-utilities\") pod \"redhat-marketplace-phq66\" (UID: \"a4a649bd-963b-42eb-8283-2f6d98b54ef8\") " pod="openshift-marketplace/redhat-marketplace-phq66" Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.953338 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4a649bd-963b-42eb-8283-2f6d98b54ef8-utilities\") pod \"redhat-marketplace-phq66\" (UID: \"a4a649bd-963b-42eb-8283-2f6d98b54ef8\") " pod="openshift-marketplace/redhat-marketplace-phq66" Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.953339 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4a649bd-963b-42eb-8283-2f6d98b54ef8-catalog-content\") pod \"redhat-marketplace-phq66\" (UID: \"a4a649bd-963b-42eb-8283-2f6d98b54ef8\") " pod="openshift-marketplace/redhat-marketplace-phq66" Dec 08 17:42:46 crc kubenswrapper[5112]: I1208 17:42:46.975484 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-99gqn\" (UniqueName: \"kubernetes.io/projected/a4a649bd-963b-42eb-8283-2f6d98b54ef8-kube-api-access-99gqn\") pod \"redhat-marketplace-phq66\" (UID: \"a4a649bd-963b-42eb-8283-2f6d98b54ef8\") " pod="openshift-marketplace/redhat-marketplace-phq66" 
Dec 08 17:42:47 crc kubenswrapper[5112]: I1208 17:42:47.061683 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:47 crc kubenswrapper[5112]: E1208 17:42:47.062407 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:47.562365515 +0000 UTC m=+144.571914226 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:47 crc kubenswrapper[5112]: I1208 17:42:47.063039 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:47 crc kubenswrapper[5112]: E1208 17:42:47.064051 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-08 17:42:47.564024929 +0000 UTC m=+144.573573640 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:47 crc kubenswrapper[5112]: I1208 17:42:47.074107 5112 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-p9hpg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:42:47 crc kubenswrapper[5112]: [-]has-synced failed: reason withheld Dec 08 17:42:47 crc kubenswrapper[5112]: [+]process-running ok Dec 08 17:42:47 crc kubenswrapper[5112]: healthz check failed Dec 08 17:42:47 crc kubenswrapper[5112]: I1208 17:42:47.074165 5112 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-p9hpg" podUID="65a17c30-dc44-43d4-8563-e5161462458c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:42:47 crc kubenswrapper[5112]: I1208 17:42:47.079320 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-phq66" Dec 08 17:42:47 crc kubenswrapper[5112]: I1208 17:42:47.105753 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 08 17:42:47 crc kubenswrapper[5112]: I1208 17:42:47.112220 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-7q49w" Dec 08 17:42:47 crc kubenswrapper[5112]: I1208 17:42:47.125351 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gsp8f"] Dec 08 17:42:47 crc kubenswrapper[5112]: I1208 17:42:47.134038 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gsp8f" Dec 08 17:42:47 crc kubenswrapper[5112]: I1208 17:42:47.139476 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gsp8f"] Dec 08 17:42:47 crc kubenswrapper[5112]: I1208 17:42:47.166500 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:47 crc kubenswrapper[5112]: E1208 17:42:47.166947 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:47.666926738 +0000 UTC m=+144.676475439 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:47 crc kubenswrapper[5112]: I1208 17:42:47.167042 5112 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 08 17:42:47 crc kubenswrapper[5112]: I1208 17:42:47.269936 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hd6p\" (UniqueName: \"kubernetes.io/projected/fb826094-3e88-481d-bf22-ad5c3eb0f280-kube-api-access-2hd6p\") pod \"redhat-marketplace-gsp8f\" (UID: \"fb826094-3e88-481d-bf22-ad5c3eb0f280\") " pod="openshift-marketplace/redhat-marketplace-gsp8f" Dec 08 17:42:47 crc kubenswrapper[5112]: I1208 17:42:47.269993 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb826094-3e88-481d-bf22-ad5c3eb0f280-utilities\") pod \"redhat-marketplace-gsp8f\" (UID: \"fb826094-3e88-481d-bf22-ad5c3eb0f280\") " pod="openshift-marketplace/redhat-marketplace-gsp8f" Dec 08 17:42:47 crc kubenswrapper[5112]: I1208 17:42:47.270032 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:47 crc 
kubenswrapper[5112]: I1208 17:42:47.270105 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb826094-3e88-481d-bf22-ad5c3eb0f280-catalog-content\") pod \"redhat-marketplace-gsp8f\" (UID: \"fb826094-3e88-481d-bf22-ad5c3eb0f280\") " pod="openshift-marketplace/redhat-marketplace-gsp8f" Dec 08 17:42:47 crc kubenswrapper[5112]: E1208 17:42:47.270811 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:47.770796763 +0000 UTC m=+144.780345464 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:47 crc kubenswrapper[5112]: I1208 17:42:47.371201 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:47 crc kubenswrapper[5112]: E1208 17:42:47.371353 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-08 17:42:47.871329958 +0000 UTC m=+144.880878659 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:47 crc kubenswrapper[5112]: I1208 17:42:47.371769 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb826094-3e88-481d-bf22-ad5c3eb0f280-utilities\") pod \"redhat-marketplace-gsp8f\" (UID: \"fb826094-3e88-481d-bf22-ad5c3eb0f280\") " pod="openshift-marketplace/redhat-marketplace-gsp8f" Dec 08 17:42:47 crc kubenswrapper[5112]: I1208 17:42:47.371827 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:47 crc kubenswrapper[5112]: I1208 17:42:47.371871 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb826094-3e88-481d-bf22-ad5c3eb0f280-catalog-content\") pod \"redhat-marketplace-gsp8f\" (UID: \"fb826094-3e88-481d-bf22-ad5c3eb0f280\") " pod="openshift-marketplace/redhat-marketplace-gsp8f" Dec 08 17:42:47 crc kubenswrapper[5112]: I1208 17:42:47.371945 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2hd6p\" (UniqueName: 
\"kubernetes.io/projected/fb826094-3e88-481d-bf22-ad5c3eb0f280-kube-api-access-2hd6p\") pod \"redhat-marketplace-gsp8f\" (UID: \"fb826094-3e88-481d-bf22-ad5c3eb0f280\") " pod="openshift-marketplace/redhat-marketplace-gsp8f" Dec 08 17:42:47 crc kubenswrapper[5112]: I1208 17:42:47.372325 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb826094-3e88-481d-bf22-ad5c3eb0f280-utilities\") pod \"redhat-marketplace-gsp8f\" (UID: \"fb826094-3e88-481d-bf22-ad5c3eb0f280\") " pod="openshift-marketplace/redhat-marketplace-gsp8f" Dec 08 17:42:47 crc kubenswrapper[5112]: E1208 17:42:47.372524 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:47.872513409 +0000 UTC m=+144.882062120 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-vpxb8" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:47 crc kubenswrapper[5112]: I1208 17:42:47.373193 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb826094-3e88-481d-bf22-ad5c3eb0f280-catalog-content\") pod \"redhat-marketplace-gsp8f\" (UID: \"fb826094-3e88-481d-bf22-ad5c3eb0f280\") " pod="openshift-marketplace/redhat-marketplace-gsp8f" Dec 08 17:42:47 crc kubenswrapper[5112]: I1208 17:42:47.407658 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hd6p\" (UniqueName: 
\"kubernetes.io/projected/fb826094-3e88-481d-bf22-ad5c3eb0f280-kube-api-access-2hd6p\") pod \"redhat-marketplace-gsp8f\" (UID: \"fb826094-3e88-481d-bf22-ad5c3eb0f280\") " pod="openshift-marketplace/redhat-marketplace-gsp8f" Dec 08 17:42:47 crc kubenswrapper[5112]: I1208 17:42:47.465232 5112 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-08T17:42:47.167065792Z","UUID":"cc1f1819-acb4-47cc-a239-5d57ca2dff10","Handler":null,"Name":"","Endpoint":""} Dec 08 17:42:47 crc kubenswrapper[5112]: I1208 17:42:47.467804 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gsp8f" Dec 08 17:42:47 crc kubenswrapper[5112]: I1208 17:42:47.473195 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:47 crc kubenswrapper[5112]: E1208 17:42:47.473674 5112 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:47.973653311 +0000 UTC m=+144.983202012 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:47 crc kubenswrapper[5112]: I1208 17:42:47.474682 5112 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 08 17:42:47 crc kubenswrapper[5112]: I1208 17:42:47.474714 5112 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 08 17:42:47 crc kubenswrapper[5112]: I1208 17:42:47.572281 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-phq66"] Dec 08 17:42:47 crc kubenswrapper[5112]: I1208 17:42:47.574475 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:47 crc kubenswrapper[5112]: I1208 17:42:47.717210 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-hc5xj" Dec 08 17:42:47 crc kubenswrapper[5112]: I1208 17:42:47.739353 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gsp8f"] Dec 08 17:42:47 crc kubenswrapper[5112]: W1208 
17:42:47.749252 5112 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb826094_3e88_481d_bf22_ad5c3eb0f280.slice/crio-b1fa86a6340863b5d8576d5daf79440e06cad82dd9f4e779bee46e29f6030b27 WatchSource:0}: Error finding container b1fa86a6340863b5d8576d5daf79440e06cad82dd9f4e779bee46e29f6030b27: Status 404 returned error can't find the container with id b1fa86a6340863b5d8576d5daf79440e06cad82dd9f4e779bee46e29f6030b27 Dec 08 17:42:48 crc kubenswrapper[5112]: I1208 17:42:48.066924 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-p9hpg" Dec 08 17:42:48 crc kubenswrapper[5112]: I1208 17:42:48.070153 5112 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-p9hpg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:42:48 crc kubenswrapper[5112]: [-]has-synced failed: reason withheld Dec 08 17:42:48 crc kubenswrapper[5112]: [+]process-running ok Dec 08 17:42:48 crc kubenswrapper[5112]: healthz check failed Dec 08 17:42:48 crc kubenswrapper[5112]: I1208 17:42:48.070261 5112 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-p9hpg" podUID="65a17c30-dc44-43d4-8563-e5161462458c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:42:48 crc kubenswrapper[5112]: I1208 17:42:48.101386 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4p756"] Dec 08 17:42:48 crc kubenswrapper[5112]: W1208 17:42:48.511489 5112 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4a649bd_963b_42eb_8283_2f6d98b54ef8.slice/crio-4ebd3131d186688d02e678708c94a952c67db5da0eecf94944c79d4491925ac4 
WatchSource:0}: Error finding container 4ebd3131d186688d02e678708c94a952c67db5da0eecf94944c79d4491925ac4: Status 404 returned error can't find the container with id 4ebd3131d186688d02e678708c94a952c67db5da0eecf94944c79d4491925ac4 Dec 08 17:42:48 crc kubenswrapper[5112]: I1208 17:42:48.525621 5112 generic.go:358] "Generic (PLEG): container finished" podID="027b046b-01a1-48d8-a6b7-d03fd6509f1f" containerID="a67ab10e6551881f65db43d5dabc235689344644f4ddfea17b4657e6adb854c2" exitCode=0 Dec 08 17:42:48 crc kubenswrapper[5112]: I1208 17:42:48.575205 5112 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 08 17:42:48 crc kubenswrapper[5112]: I1208 17:42:48.575278 5112 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:48 crc kubenswrapper[5112]: I1208 17:42:48.873461 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-vpxb8\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:48 crc kubenswrapper[5112]: I1208 17:42:48.890726 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:48 crc kubenswrapper[5112]: I1208 17:42:48.905942 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Dec 08 17:42:49 crc kubenswrapper[5112]: I1208 17:42:49.028233 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4p756"] Dec 08 17:42:49 crc kubenswrapper[5112]: I1208 17:42:49.028306 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"96110509e886639af34ed3cf7039a5b1af494d53599e62f814e7f93c6ef1582b"} Dec 08 17:42:49 crc kubenswrapper[5112]: I1208 17:42:49.028343 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 08 17:42:49 crc kubenswrapper[5112]: I1208 17:42:49.028540 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4p756" Dec 08 17:42:49 crc kubenswrapper[5112]: I1208 17:42:49.032925 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Dec 08 17:42:49 crc kubenswrapper[5112]: I1208 17:42:49.041340 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:42:49 crc kubenswrapper[5112]: I1208 17:42:49.070931 5112 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-p9hpg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:42:49 crc kubenswrapper[5112]: [-]has-synced failed: reason withheld Dec 08 17:42:49 crc kubenswrapper[5112]: [+]process-running ok Dec 08 17:42:49 crc kubenswrapper[5112]: healthz check failed Dec 08 17:42:49 crc kubenswrapper[5112]: I1208 17:42:49.071015 5112 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-p9hpg" podUID="65a17c30-dc44-43d4-8563-e5161462458c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:42:49 crc kubenswrapper[5112]: I1208 17:42:49.133502 5112 ???:1] "http: TLS handshake error from 192.168.126.11:44670: no serving certificate available for the kubelet" Dec 08 17:42:49 crc kubenswrapper[5112]: I1208 17:42:49.196039 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36b34f0a-51c8-41d9-a61c-dbc0104bea5d-catalog-content\") pod \"redhat-operators-4p756\" (UID: \"36b34f0a-51c8-41d9-a61c-dbc0104bea5d\") " pod="openshift-marketplace/redhat-operators-4p756" Dec 08 17:42:49 crc kubenswrapper[5112]: I1208 17:42:49.198143 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36b34f0a-51c8-41d9-a61c-dbc0104bea5d-utilities\") pod \"redhat-operators-4p756\" (UID: \"36b34f0a-51c8-41d9-a61c-dbc0104bea5d\") " pod="openshift-marketplace/redhat-operators-4p756" Dec 08 17:42:49 crc kubenswrapper[5112]: I1208 17:42:49.198185 5112 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxbnf\" (UniqueName: \"kubernetes.io/projected/36b34f0a-51c8-41d9-a61c-dbc0104bea5d-kube-api-access-mxbnf\") pod \"redhat-operators-4p756\" (UID: \"36b34f0a-51c8-41d9-a61c-dbc0104bea5d\") " pod="openshift-marketplace/redhat-operators-4p756" Dec 08 17:42:49 crc kubenswrapper[5112]: I1208 17:42:49.299435 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36b34f0a-51c8-41d9-a61c-dbc0104bea5d-catalog-content\") pod \"redhat-operators-4p756\" (UID: \"36b34f0a-51c8-41d9-a61c-dbc0104bea5d\") " pod="openshift-marketplace/redhat-operators-4p756" Dec 08 17:42:49 crc kubenswrapper[5112]: I1208 17:42:49.299520 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36b34f0a-51c8-41d9-a61c-dbc0104bea5d-utilities\") pod \"redhat-operators-4p756\" (UID: \"36b34f0a-51c8-41d9-a61c-dbc0104bea5d\") " pod="openshift-marketplace/redhat-operators-4p756" Dec 08 17:42:49 crc kubenswrapper[5112]: I1208 17:42:49.299550 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mxbnf\" (UniqueName: \"kubernetes.io/projected/36b34f0a-51c8-41d9-a61c-dbc0104bea5d-kube-api-access-mxbnf\") pod \"redhat-operators-4p756\" (UID: \"36b34f0a-51c8-41d9-a61c-dbc0104bea5d\") " pod="openshift-marketplace/redhat-operators-4p756" Dec 08 17:42:49 crc kubenswrapper[5112]: I1208 17:42:49.300065 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36b34f0a-51c8-41d9-a61c-dbc0104bea5d-catalog-content\") pod \"redhat-operators-4p756\" (UID: \"36b34f0a-51c8-41d9-a61c-dbc0104bea5d\") " pod="openshift-marketplace/redhat-operators-4p756" Dec 08 17:42:49 crc kubenswrapper[5112]: I1208 17:42:49.301041 5112 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36b34f0a-51c8-41d9-a61c-dbc0104bea5d-utilities\") pod \"redhat-operators-4p756\" (UID: \"36b34f0a-51c8-41d9-a61c-dbc0104bea5d\") " pod="openshift-marketplace/redhat-operators-4p756" Dec 08 17:42:49 crc kubenswrapper[5112]: I1208 17:42:49.322452 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxbnf\" (UniqueName: \"kubernetes.io/projected/36b34f0a-51c8-41d9-a61c-dbc0104bea5d-kube-api-access-mxbnf\") pod \"redhat-operators-4p756\" (UID: \"36b34f0a-51c8-41d9-a61c-dbc0104bea5d\") " pod="openshift-marketplace/redhat-operators-4p756" Dec 08 17:42:49 crc kubenswrapper[5112]: I1208 17:42:49.396049 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4p756" Dec 08 17:42:49 crc kubenswrapper[5112]: I1208 17:42:49.459768 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 17:42:49 crc kubenswrapper[5112]: I1208 17:42:49.462102 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Dec 08 17:42:49 crc kubenswrapper[5112]: I1208 17:42:49.463254 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Dec 08 17:42:49 crc kubenswrapper[5112]: I1208 17:42:49.471437 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes" Dec 08 17:42:49 crc kubenswrapper[5112]: I1208 17:42:49.476543 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 08 17:42:49 crc kubenswrapper[5112]: I1208 17:42:49.476782 5112 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-5llh9"] Dec 08 17:42:49 crc kubenswrapper[5112]: I1208 17:42:49.603756 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8fcd7359-28db-4b18-8d86-eb663b9a3807-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"8fcd7359-28db-4b18-8d86-eb663b9a3807\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 17:42:49 crc kubenswrapper[5112]: I1208 17:42:49.603821 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8fcd7359-28db-4b18-8d86-eb663b9a3807-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"8fcd7359-28db-4b18-8d86-eb663b9a3807\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 17:42:49 crc kubenswrapper[5112]: I1208 17:42:49.655837 5112 patch_prober.go:28] interesting pod/downloads-747b44746d-gv282 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Dec 08 17:42:49 crc kubenswrapper[5112]: I1208 17:42:49.655959 5112 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-gv282" podUID="3b27b80a-df1a-4a29-82d6-384db5b6612e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Dec 08 17:42:49 crc kubenswrapper[5112]: I1208 17:42:49.705159 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8fcd7359-28db-4b18-8d86-eb663b9a3807-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"8fcd7359-28db-4b18-8d86-eb663b9a3807\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 17:42:49 crc 
kubenswrapper[5112]: I1208 17:42:49.705241 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8fcd7359-28db-4b18-8d86-eb663b9a3807-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"8fcd7359-28db-4b18-8d86-eb663b9a3807\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 17:42:49 crc kubenswrapper[5112]: I1208 17:42:49.705378 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8fcd7359-28db-4b18-8d86-eb663b9a3807-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"8fcd7359-28db-4b18-8d86-eb663b9a3807\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 17:42:49 crc kubenswrapper[5112]: I1208 17:42:49.724466 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8fcd7359-28db-4b18-8d86-eb663b9a3807-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"8fcd7359-28db-4b18-8d86-eb663b9a3807\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 17:42:49 crc kubenswrapper[5112]: I1208 17:42:49.788451 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 17:42:50 crc kubenswrapper[5112]: W1208 17:42:50.009893 5112 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod8fcd7359_28db_4b18_8d86_eb663b9a3807.slice/crio-4a00495089c1279ec1b3cc5e2eaabe3e2d9ab8e60fdd421b62581e0d07905c4b WatchSource:0}: Error finding container 4a00495089c1279ec1b3cc5e2eaabe3e2d9ab8e60fdd421b62581e0d07905c4b: Status 404 returned error can't find the container with id 4a00495089c1279ec1b3cc5e2eaabe3e2d9ab8e60fdd421b62581e0d07905c4b Dec 08 17:42:50 crc kubenswrapper[5112]: I1208 17:42:50.070110 5112 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-p9hpg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:42:50 crc kubenswrapper[5112]: [-]has-synced failed: reason withheld Dec 08 17:42:50 crc kubenswrapper[5112]: [+]process-running ok Dec 08 17:42:50 crc kubenswrapper[5112]: healthz check failed Dec 08 17:42:50 crc kubenswrapper[5112]: I1208 17:42:50.070200 5112 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-p9hpg" podUID="65a17c30-dc44-43d4-8563-e5161462458c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:42:50 crc kubenswrapper[5112]: I1208 17:42:50.439470 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gsp8f" event={"ID":"fb826094-3e88-481d-bf22-ad5c3eb0f280","Type":"ContainerStarted","Data":"b1fa86a6340863b5d8576d5daf79440e06cad82dd9f4e779bee46e29f6030b27"} Dec 08 17:42:50 crc kubenswrapper[5112]: I1208 17:42:50.439796 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5llh9"] Dec 08 17:42:50 crc kubenswrapper[5112]: I1208 17:42:50.439817 5112 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7jq8h" event={"ID":"3c4fb553-8514-4194-847c-96d40f8b41e3","Type":"ContainerStarted","Data":"9d2479c0ffb94195b9b63e793c1cfeda438a0fe42ef80cce45a1f43db8f857e8"} Dec 08 17:42:50 crc kubenswrapper[5112]: I1208 17:42:50.439831 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7jq8h" event={"ID":"3c4fb553-8514-4194-847c-96d40f8b41e3","Type":"ContainerStarted","Data":"c3362ab3f5035038404babfd24814b14856b3af1b6a4b9296464f53b12f422d7"} Dec 08 17:42:50 crc kubenswrapper[5112]: I1208 17:42:50.439843 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"eb93d7dbb96a3cceea9559dac7ccd64ddc3b9efc931a648c1d1221d67cc5436b"} Dec 08 17:42:50 crc kubenswrapper[5112]: I1208 17:42:50.439651 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5llh9" Dec 08 17:42:50 crc kubenswrapper[5112]: I1208 17:42:50.439874 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-vpxb8"] Dec 08 17:42:50 crc kubenswrapper[5112]: I1208 17:42:50.439889 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4p756"] Dec 08 17:42:50 crc kubenswrapper[5112]: I1208 17:42:50.439901 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"e36ce560eb194b3e68ba8c4fe21a80138312f51d0ad62d28aef0be3befef5312"} Dec 08 17:42:50 crc kubenswrapper[5112]: I1208 17:42:50.439915 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"9bdcf563-b973-48ef-8c03-dbc3dc5eed6b","Type":"ContainerStarted","Data":"b6ffc758722312900c9105e5af060a6b7a60429061fbe6abace773787691dc2d"} Dec 08 17:42:50 crc kubenswrapper[5112]: I1208 17:42:50.439932 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 08 17:42:50 crc kubenswrapper[5112]: I1208 17:42:50.439951 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qqqtw" event={"ID":"027b046b-01a1-48d8-a6b7-d03fd6509f1f","Type":"ContainerDied","Data":"a67ab10e6551881f65db43d5dabc235689344644f4ddfea17b4657e6adb854c2"} Dec 08 17:42:50 crc kubenswrapper[5112]: I1208 17:42:50.439969 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-p4h9p" event={"ID":"ace9dd66-3bc5-4b64-afe3-4f05af28644c","Type":"ContainerStarted","Data":"72e04fca3b64ccbab2497f64b4e5c035f32341f15fa01d8b1769a50c4032272c"} Dec 08 17:42:50 crc kubenswrapper[5112]: I1208 17:42:50.439984 5112 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-p4h9p" event={"ID":"ace9dd66-3bc5-4b64-afe3-4f05af28644c","Type":"ContainerStarted","Data":"6888e7e38a3cf89c391c8ef1633fc4c7a5a6b09b823e2305646f27eacedc20b8"} Dec 08 17:42:50 crc kubenswrapper[5112]: I1208 17:42:50.439996 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phq66" event={"ID":"a4a649bd-963b-42eb-8283-2f6d98b54ef8","Type":"ContainerStarted","Data":"4ebd3131d186688d02e678708c94a952c67db5da0eecf94944c79d4491925ac4"} Dec 08 17:42:50 crc kubenswrapper[5112]: I1208 17:42:50.440009 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"9bdcf563-b973-48ef-8c03-dbc3dc5eed6b","Type":"ContainerStarted","Data":"1e408eadff6baaf63c2c02c3836f5e665091333098fb9a4fd0c4e57da7884e03"} Dec 08 17:42:50 crc kubenswrapper[5112]: I1208 17:42:50.557564 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4p756" event={"ID":"36b34f0a-51c8-41d9-a61c-dbc0104bea5d","Type":"ContainerStarted","Data":"795f631ba23772bde690f32401b15427628d71c86675a5e4e1b4e8e20f7c7dce"} Dec 08 17:42:50 crc kubenswrapper[5112]: I1208 17:42:50.559549 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" event={"ID":"2ea5f194-6a0d-4339-9c15-bde6d3ca1540","Type":"ContainerStarted","Data":"40aceaf3237317514299eaa17381a2618c7ee9d07e97c546e7d8bc1f2f20907b"} Dec 08 17:42:50 crc kubenswrapper[5112]: I1208 17:42:50.561314 5112 generic.go:358] "Generic (PLEG): container finished" podID="fb826094-3e88-481d-bf22-ad5c3eb0f280" containerID="2b10d326d476f9bd22357ec4d5827249a676c88a3e41665499d4bfe746867c32" exitCode=0 Dec 08 17:42:50 crc kubenswrapper[5112]: I1208 17:42:50.561396 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gsp8f" 
event={"ID":"fb826094-3e88-481d-bf22-ad5c3eb0f280","Type":"ContainerDied","Data":"2b10d326d476f9bd22357ec4d5827249a676c88a3e41665499d4bfe746867c32"} Dec 08 17:42:50 crc kubenswrapper[5112]: I1208 17:42:50.562846 5112 generic.go:358] "Generic (PLEG): container finished" podID="a4a649bd-963b-42eb-8283-2f6d98b54ef8" containerID="179e63f3e82b818415472bc39abb3624b25eadae79e49f48e8a12d6067e8efe3" exitCode=0 Dec 08 17:42:50 crc kubenswrapper[5112]: I1208 17:42:50.562947 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phq66" event={"ID":"a4a649bd-963b-42eb-8283-2f6d98b54ef8","Type":"ContainerDied","Data":"179e63f3e82b818415472bc39abb3624b25eadae79e49f48e8a12d6067e8efe3"} Dec 08 17:42:50 crc kubenswrapper[5112]: I1208 17:42:50.568663 5112 generic.go:358] "Generic (PLEG): container finished" podID="9bdcf563-b973-48ef-8c03-dbc3dc5eed6b" containerID="1e408eadff6baaf63c2c02c3836f5e665091333098fb9a4fd0c4e57da7884e03" exitCode=0 Dec 08 17:42:50 crc kubenswrapper[5112]: I1208 17:42:50.568896 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"9bdcf563-b973-48ef-8c03-dbc3dc5eed6b","Type":"ContainerDied","Data":"1e408eadff6baaf63c2c02c3836f5e665091333098fb9a4fd0c4e57da7884e03"} Dec 08 17:42:50 crc kubenswrapper[5112]: I1208 17:42:50.570459 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"8fcd7359-28db-4b18-8d86-eb663b9a3807","Type":"ContainerStarted","Data":"4a00495089c1279ec1b3cc5e2eaabe3e2d9ab8e60fdd421b62581e0d07905c4b"} Dec 08 17:42:50 crc kubenswrapper[5112]: I1208 17:42:50.619398 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpw5l\" (UniqueName: \"kubernetes.io/projected/0f6a3ac4-dcc2-4fbd-8699-d97127b35495-kube-api-access-xpw5l\") pod \"redhat-operators-5llh9\" (UID: \"0f6a3ac4-dcc2-4fbd-8699-d97127b35495\") " 
pod="openshift-marketplace/redhat-operators-5llh9" Dec 08 17:42:50 crc kubenswrapper[5112]: I1208 17:42:50.619672 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f6a3ac4-dcc2-4fbd-8699-d97127b35495-utilities\") pod \"redhat-operators-5llh9\" (UID: \"0f6a3ac4-dcc2-4fbd-8699-d97127b35495\") " pod="openshift-marketplace/redhat-operators-5llh9" Dec 08 17:42:50 crc kubenswrapper[5112]: I1208 17:42:50.619766 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f6a3ac4-dcc2-4fbd-8699-d97127b35495-catalog-content\") pod \"redhat-operators-5llh9\" (UID: \"0f6a3ac4-dcc2-4fbd-8699-d97127b35495\") " pod="openshift-marketplace/redhat-operators-5llh9" Dec 08 17:42:50 crc kubenswrapper[5112]: I1208 17:42:50.720895 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xpw5l\" (UniqueName: \"kubernetes.io/projected/0f6a3ac4-dcc2-4fbd-8699-d97127b35495-kube-api-access-xpw5l\") pod \"redhat-operators-5llh9\" (UID: \"0f6a3ac4-dcc2-4fbd-8699-d97127b35495\") " pod="openshift-marketplace/redhat-operators-5llh9" Dec 08 17:42:50 crc kubenswrapper[5112]: I1208 17:42:50.721012 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f6a3ac4-dcc2-4fbd-8699-d97127b35495-utilities\") pod \"redhat-operators-5llh9\" (UID: \"0f6a3ac4-dcc2-4fbd-8699-d97127b35495\") " pod="openshift-marketplace/redhat-operators-5llh9" Dec 08 17:42:50 crc kubenswrapper[5112]: I1208 17:42:50.721046 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f6a3ac4-dcc2-4fbd-8699-d97127b35495-catalog-content\") pod \"redhat-operators-5llh9\" (UID: \"0f6a3ac4-dcc2-4fbd-8699-d97127b35495\") " 
pod="openshift-marketplace/redhat-operators-5llh9" Dec 08 17:42:50 crc kubenswrapper[5112]: I1208 17:42:50.721518 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f6a3ac4-dcc2-4fbd-8699-d97127b35495-catalog-content\") pod \"redhat-operators-5llh9\" (UID: \"0f6a3ac4-dcc2-4fbd-8699-d97127b35495\") " pod="openshift-marketplace/redhat-operators-5llh9" Dec 08 17:42:50 crc kubenswrapper[5112]: I1208 17:42:50.722564 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f6a3ac4-dcc2-4fbd-8699-d97127b35495-utilities\") pod \"redhat-operators-5llh9\" (UID: \"0f6a3ac4-dcc2-4fbd-8699-d97127b35495\") " pod="openshift-marketplace/redhat-operators-5llh9" Dec 08 17:42:50 crc kubenswrapper[5112]: I1208 17:42:50.745623 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpw5l\" (UniqueName: \"kubernetes.io/projected/0f6a3ac4-dcc2-4fbd-8699-d97127b35495-kube-api-access-xpw5l\") pod \"redhat-operators-5llh9\" (UID: \"0f6a3ac4-dcc2-4fbd-8699-d97127b35495\") " pod="openshift-marketplace/redhat-operators-5llh9" Dec 08 17:42:50 crc kubenswrapper[5112]: I1208 17:42:50.888708 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5llh9" Dec 08 17:42:51 crc kubenswrapper[5112]: I1208 17:42:51.071173 5112 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-p9hpg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:42:51 crc kubenswrapper[5112]: [-]has-synced failed: reason withheld Dec 08 17:42:51 crc kubenswrapper[5112]: [+]process-running ok Dec 08 17:42:51 crc kubenswrapper[5112]: healthz check failed Dec 08 17:42:51 crc kubenswrapper[5112]: I1208 17:42:51.071240 5112 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-p9hpg" podUID="65a17c30-dc44-43d4-8563-e5161462458c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:42:51 crc kubenswrapper[5112]: I1208 17:42:51.154203 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5llh9"] Dec 08 17:42:51 crc kubenswrapper[5112]: W1208 17:42:51.157657 5112 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f6a3ac4_dcc2_4fbd_8699_d97127b35495.slice/crio-1562d65c93d960de7fe1da3242cd9bbe172cd9b1c25bd80fa8fd413d7f99b2ef WatchSource:0}: Error finding container 1562d65c93d960de7fe1da3242cd9bbe172cd9b1c25bd80fa8fd413d7f99b2ef: Status 404 returned error can't find the container with id 1562d65c93d960de7fe1da3242cd9bbe172cd9b1c25bd80fa8fd413d7f99b2ef Dec 08 17:42:51 crc kubenswrapper[5112]: I1208 17:42:51.583040 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-p4h9p" event={"ID":"ace9dd66-3bc5-4b64-afe3-4f05af28644c","Type":"ContainerStarted","Data":"9a26ef5d64e8c68ce956f9b45b35c463179c1b8996e6c455d22fc1c1cd0c63e8"} Dec 08 17:42:51 crc kubenswrapper[5112]: I1208 17:42:51.588435 
5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5llh9" event={"ID":"0f6a3ac4-dcc2-4fbd-8699-d97127b35495","Type":"ContainerStarted","Data":"1562d65c93d960de7fe1da3242cd9bbe172cd9b1c25bd80fa8fd413d7f99b2ef"}
Dec 08 17:42:51 crc kubenswrapper[5112]: I1208 17:42:51.591741 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7jq8h" event={"ID":"3c4fb553-8514-4194-847c-96d40f8b41e3","Type":"ContainerStarted","Data":"ad3f8a06b7f408acba4634a22a9bb0183ea467f809c910b098b7c4c8a56f0cd1"}
Dec 08 17:42:51 crc kubenswrapper[5112]: I1208 17:42:51.738719 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-7jq8h" podStartSLOduration=129.738697765 podStartE2EDuration="2m9.738697765s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:51.736934898 +0000 UTC m=+148.746483619" watchObservedRunningTime="2025-12-08 17:42:51.738697765 +0000 UTC m=+148.748246466"
Dec 08 17:42:51 crc kubenswrapper[5112]: I1208 17:42:51.832375 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 08 17:42:51 crc kubenswrapper[5112]: I1208 17:42:51.872286 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9bdcf563-b973-48ef-8c03-dbc3dc5eed6b-kube-api-access\") pod \"9bdcf563-b973-48ef-8c03-dbc3dc5eed6b\" (UID: \"9bdcf563-b973-48ef-8c03-dbc3dc5eed6b\") "
Dec 08 17:42:51 crc kubenswrapper[5112]: I1208 17:42:51.872347 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9bdcf563-b973-48ef-8c03-dbc3dc5eed6b-kubelet-dir\") pod \"9bdcf563-b973-48ef-8c03-dbc3dc5eed6b\" (UID: \"9bdcf563-b973-48ef-8c03-dbc3dc5eed6b\") "
Dec 08 17:42:51 crc kubenswrapper[5112]: I1208 17:42:51.872721 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bdcf563-b973-48ef-8c03-dbc3dc5eed6b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9bdcf563-b973-48ef-8c03-dbc3dc5eed6b" (UID: "9bdcf563-b973-48ef-8c03-dbc3dc5eed6b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 17:42:51 crc kubenswrapper[5112]: I1208 17:42:51.885615 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bdcf563-b973-48ef-8c03-dbc3dc5eed6b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9bdcf563-b973-48ef-8c03-dbc3dc5eed6b" (UID: "9bdcf563-b973-48ef-8c03-dbc3dc5eed6b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:42:51 crc kubenswrapper[5112]: I1208 17:42:51.974675 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9bdcf563-b973-48ef-8c03-dbc3dc5eed6b-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 08 17:42:51 crc kubenswrapper[5112]: I1208 17:42:51.974709 5112 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9bdcf563-b973-48ef-8c03-dbc3dc5eed6b-kubelet-dir\") on node \"crc\" DevicePath \"\""
Dec 08 17:42:51 crc kubenswrapper[5112]: I1208 17:42:51.997894 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z"
Dec 08 17:42:52 crc kubenswrapper[5112]: I1208 17:42:52.173564 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-p9hpg"
Dec 08 17:42:52 crc kubenswrapper[5112]: I1208 17:42:52.176520 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-p9hpg"
Dec 08 17:42:52 crc kubenswrapper[5112]: I1208 17:42:52.617184 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" event={"ID":"2ea5f194-6a0d-4339-9c15-bde6d3ca1540","Type":"ContainerStarted","Data":"3f5fd0a020aac31cd78f6c9bf9e4ad429957baa32a1a5a56cf140afb4e534d94"}
Dec 08 17:42:52 crc kubenswrapper[5112]: I1208 17:42:52.617655 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-vpxb8"
Dec 08 17:42:52 crc kubenswrapper[5112]: E1208 17:42:52.619277 5112 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6f5e811595ca12980ae3c34f2a35e64646b6c89adddcc9e33a8090bb47b71053" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 08 17:42:52 crc kubenswrapper[5112]: I1208 17:42:52.619809 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"9bdcf563-b973-48ef-8c03-dbc3dc5eed6b","Type":"ContainerDied","Data":"b6ffc758722312900c9105e5af060a6b7a60429061fbe6abace773787691dc2d"}
Dec 08 17:42:52 crc kubenswrapper[5112]: I1208 17:42:52.619835 5112 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6ffc758722312900c9105e5af060a6b7a60429061fbe6abace773787691dc2d"
Dec 08 17:42:52 crc kubenswrapper[5112]: I1208 17:42:52.619900 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 08 17:42:52 crc kubenswrapper[5112]: E1208 17:42:52.621698 5112 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6f5e811595ca12980ae3c34f2a35e64646b6c89adddcc9e33a8090bb47b71053" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 08 17:42:52 crc kubenswrapper[5112]: I1208 17:42:52.622387 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"8fcd7359-28db-4b18-8d86-eb663b9a3807","Type":"ContainerStarted","Data":"c1bc4cf34b8f70c8d0dcd1cbdc783746a57d9f54a024590441a0ae5c3f9e056d"}
Dec 08 17:42:52 crc kubenswrapper[5112]: E1208 17:42:52.624188 5112 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6f5e811595ca12980ae3c34f2a35e64646b6c89adddcc9e33a8090bb47b71053" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 08 17:42:52 crc kubenswrapper[5112]: E1208 17:42:52.624236 5112 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-6gxxt" podUID="234e7e70-7bb6-457f-a170-f1349602c58a" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Dec 08 17:42:52 crc kubenswrapper[5112]: I1208 17:42:52.625487 5112 generic.go:358] "Generic (PLEG): container finished" podID="0f6a3ac4-dcc2-4fbd-8699-d97127b35495" containerID="32028fc43196a83f96b2f1130e2802dee4481cc57b3177b9eec338667f9c0110" exitCode=0
Dec 08 17:42:52 crc kubenswrapper[5112]: I1208 17:42:52.625612 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5llh9" event={"ID":"0f6a3ac4-dcc2-4fbd-8699-d97127b35495","Type":"ContainerDied","Data":"32028fc43196a83f96b2f1130e2802dee4481cc57b3177b9eec338667f9c0110"}
Dec 08 17:42:52 crc kubenswrapper[5112]: I1208 17:42:52.628798 5112 generic.go:358] "Generic (PLEG): container finished" podID="36b34f0a-51c8-41d9-a61c-dbc0104bea5d" containerID="51f86c3fa8e40b9d10281b824a2768a176626627cdeb402f0c46e768d4aedd3f" exitCode=0
Dec 08 17:42:52 crc kubenswrapper[5112]: I1208 17:42:52.628916 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4p756" event={"ID":"36b34f0a-51c8-41d9-a61c-dbc0104bea5d","Type":"ContainerDied","Data":"51f86c3fa8e40b9d10281b824a2768a176626627cdeb402f0c46e768d4aedd3f"}
Dec 08 17:42:52 crc kubenswrapper[5112]: I1208 17:42:52.638592 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" podStartSLOduration=130.638575677 podStartE2EDuration="2m10.638575677s" podCreationTimestamp="2025-12-08 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:52.637453586 +0000 UTC m=+149.647002297" watchObservedRunningTime="2025-12-08 17:42:52.638575677 +0000 UTC m=+149.648124378"
Dec 08 17:42:52 crc kubenswrapper[5112]: I1208 17:42:52.657668 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-p4h9p" podStartSLOduration=18.65764857 podStartE2EDuration="18.65764857s" podCreationTimestamp="2025-12-08 17:42:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:52.654167686 +0000 UTC m=+149.663716397" watchObservedRunningTime="2025-12-08 17:42:52.65764857 +0000 UTC m=+149.667197271"
Dec 08 17:42:52 crc kubenswrapper[5112]: I1208 17:42:52.696424 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=4.696407703 podStartE2EDuration="4.696407703s" podCreationTimestamp="2025-12-08 17:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:52.695451657 +0000 UTC m=+149.705000378" watchObservedRunningTime="2025-12-08 17:42:52.696407703 +0000 UTC m=+149.705956404"
Dec 08 17:42:53 crc kubenswrapper[5112]: I1208 17:42:53.636622 5112 generic.go:358] "Generic (PLEG): container finished" podID="8fcd7359-28db-4b18-8d86-eb663b9a3807" containerID="c1bc4cf34b8f70c8d0dcd1cbdc783746a57d9f54a024590441a0ae5c3f9e056d" exitCode=0
Dec 08 17:42:53 crc kubenswrapper[5112]: I1208 17:42:53.636748 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"8fcd7359-28db-4b18-8d86-eb663b9a3807","Type":"ContainerDied","Data":"c1bc4cf34b8f70c8d0dcd1cbdc783746a57d9f54a024590441a0ae5c3f9e056d"}
Dec 08 17:42:53 crc kubenswrapper[5112]: I1208 17:42:53.691469 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-v5t7z"
Dec 08 17:42:53 crc kubenswrapper[5112]: I1208 17:42:53.722445 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-z5hzx"
Dec 08 17:42:54 crc kubenswrapper[5112]: I1208 17:42:54.276669 5112 ???:1] "http: TLS handshake error from 192.168.126.11:44676: no serving certificate available for the kubelet"
Dec 08 17:42:55 crc kubenswrapper[5112]: I1208 17:42:55.157889 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Dec 08 17:42:55 crc kubenswrapper[5112]: I1208 17:42:55.349177 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8fcd7359-28db-4b18-8d86-eb663b9a3807-kube-api-access\") pod \"8fcd7359-28db-4b18-8d86-eb663b9a3807\" (UID: \"8fcd7359-28db-4b18-8d86-eb663b9a3807\") "
Dec 08 17:42:55 crc kubenswrapper[5112]: I1208 17:42:55.349277 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8fcd7359-28db-4b18-8d86-eb663b9a3807-kubelet-dir\") pod \"8fcd7359-28db-4b18-8d86-eb663b9a3807\" (UID: \"8fcd7359-28db-4b18-8d86-eb663b9a3807\") "
Dec 08 17:42:55 crc kubenswrapper[5112]: I1208 17:42:55.349580 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fcd7359-28db-4b18-8d86-eb663b9a3807-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8fcd7359-28db-4b18-8d86-eb663b9a3807" (UID: "8fcd7359-28db-4b18-8d86-eb663b9a3807"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 17:42:55 crc kubenswrapper[5112]: I1208 17:42:55.356867 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fcd7359-28db-4b18-8d86-eb663b9a3807-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8fcd7359-28db-4b18-8d86-eb663b9a3807" (UID: "8fcd7359-28db-4b18-8d86-eb663b9a3807"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:42:55 crc kubenswrapper[5112]: I1208 17:42:55.387766 5112 patch_prober.go:28] interesting pod/console-64d44f6ddf-m2rqt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body=
Dec 08 17:42:55 crc kubenswrapper[5112]: I1208 17:42:55.387842 5112 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-m2rqt" podUID="e175a7a0-9b51-4b5d-b85a-dd604a3db837" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused"
Dec 08 17:42:55 crc kubenswrapper[5112]: I1208 17:42:55.450559 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8fcd7359-28db-4b18-8d86-eb663b9a3807-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 08 17:42:55 crc kubenswrapper[5112]: I1208 17:42:55.450590 5112 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8fcd7359-28db-4b18-8d86-eb663b9a3807-kubelet-dir\") on node \"crc\" DevicePath \"\""
Dec 08 17:42:55 crc kubenswrapper[5112]: I1208 17:42:55.729824 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"8fcd7359-28db-4b18-8d86-eb663b9a3807","Type":"ContainerDied","Data":"4a00495089c1279ec1b3cc5e2eaabe3e2d9ab8e60fdd421b62581e0d07905c4b"}
Dec 08 17:42:55 crc kubenswrapper[5112]: I1208 17:42:55.729868 5112 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a00495089c1279ec1b3cc5e2eaabe3e2d9ab8e60fdd421b62581e0d07905c4b"
Dec 08 17:42:55 crc kubenswrapper[5112]: I1208 17:42:55.729986 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Dec 08 17:42:59 crc kubenswrapper[5112]: I1208 17:42:59.657690 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-gv282"
Dec 08 17:43:02 crc kubenswrapper[5112]: E1208 17:43:02.618308 5112 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6f5e811595ca12980ae3c34f2a35e64646b6c89adddcc9e33a8090bb47b71053" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 08 17:43:02 crc kubenswrapper[5112]: E1208 17:43:02.620040 5112 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6f5e811595ca12980ae3c34f2a35e64646b6c89adddcc9e33a8090bb47b71053" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 08 17:43:02 crc kubenswrapper[5112]: E1208 17:43:02.621304 5112 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6f5e811595ca12980ae3c34f2a35e64646b6c89adddcc9e33a8090bb47b71053" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 08 17:43:02 crc kubenswrapper[5112]: E1208 17:43:02.621384 5112 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-6gxxt" podUID="234e7e70-7bb6-457f-a170-f1349602c58a" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Dec 08 17:43:04 crc kubenswrapper[5112]: I1208 17:43:04.555696 5112 ???:1] "http: TLS handshake error from 192.168.126.11:50208: no serving certificate available for the kubelet"
Dec 08 17:43:05 crc kubenswrapper[5112]: I1208 17:43:05.410404 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-m2rqt"
Dec 08 17:43:05 crc kubenswrapper[5112]: I1208 17:43:05.418044 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-m2rqt"
Dec 08 17:43:05 crc kubenswrapper[5112]: I1208 17:43:05.781370 5112 generic.go:358] "Generic (PLEG): container finished" podID="e13583b7-7ad1-4129-8b1b-0ee32c5603df" containerID="66892dceb412ecd7803975e277354d1bcc173a31c832a44bf99fb32588a17b57" exitCode=0
Dec 08 17:43:05 crc kubenswrapper[5112]: I1208 17:43:05.781539 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvq22" event={"ID":"e13583b7-7ad1-4129-8b1b-0ee32c5603df","Type":"ContainerDied","Data":"66892dceb412ecd7803975e277354d1bcc173a31c832a44bf99fb32588a17b57"}
Dec 08 17:43:05 crc kubenswrapper[5112]: I1208 17:43:05.784743 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4flg" event={"ID":"a8b663e6-709e-4802-8101-44c949911229","Type":"ContainerStarted","Data":"984bbfb75141638f8b9bc93942d54a4a539ca9e3b89a7dab90392d1ab0ed44a2"}
Dec 08 17:43:06 crc kubenswrapper[5112]: I1208 17:43:06.796670 5112 generic.go:358] "Generic (PLEG): container finished" podID="a8b663e6-709e-4802-8101-44c949911229" containerID="984bbfb75141638f8b9bc93942d54a4a539ca9e3b89a7dab90392d1ab0ed44a2" exitCode=0
Dec 08 17:43:06 crc kubenswrapper[5112]: I1208 17:43:06.796937 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4flg" event={"ID":"a8b663e6-709e-4802-8101-44c949911229","Type":"ContainerDied","Data":"984bbfb75141638f8b9bc93942d54a4a539ca9e3b89a7dab90392d1ab0ed44a2"}
Dec 08 17:43:08 crc kubenswrapper[5112]: I1208 17:43:08.851324 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qqqtw" event={"ID":"027b046b-01a1-48d8-a6b7-d03fd6509f1f","Type":"ContainerStarted","Data":"c5135dd242a8fdfa37be6f9408ab8de9691fc6e172272f13f9a2c91b1c554fce"}
Dec 08 17:43:08 crc kubenswrapper[5112]: I1208 17:43:08.896171 5112 generic.go:358] "Generic (PLEG): container finished" podID="fb826094-3e88-481d-bf22-ad5c3eb0f280" containerID="563195da28fa066f9e7f3c39b2ada339260ed92161fee935978b5cf279776ff6" exitCode=0
Dec 08 17:43:08 crc kubenswrapper[5112]: I1208 17:43:08.896253 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gsp8f" event={"ID":"fb826094-3e88-481d-bf22-ad5c3eb0f280","Type":"ContainerDied","Data":"563195da28fa066f9e7f3c39b2ada339260ed92161fee935978b5cf279776ff6"}
Dec 08 17:43:08 crc kubenswrapper[5112]: I1208 17:43:08.905706 5112 generic.go:358] "Generic (PLEG): container finished" podID="a4a649bd-963b-42eb-8283-2f6d98b54ef8" containerID="da2708a442b70e7319fcdab69b796e15f30bffc79fc26d6fc4d2cfbc4a12e58c" exitCode=0
Dec 08 17:43:08 crc kubenswrapper[5112]: I1208 17:43:08.905894 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phq66" event={"ID":"a4a649bd-963b-42eb-8283-2f6d98b54ef8","Type":"ContainerDied","Data":"da2708a442b70e7319fcdab69b796e15f30bffc79fc26d6fc4d2cfbc4a12e58c"}
Dec 08 17:43:08 crc kubenswrapper[5112]: I1208 17:43:08.909494 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zngdv" event={"ID":"ea80841c-bb81-4bd4-a6b4-dde2e04b9351","Type":"ContainerStarted","Data":"9107d51784bb4e353e9753278cfa059c660e0e3bc0273bdbe08024c231c96b3a"}
Dec 08 17:43:09 crc kubenswrapper[5112]: I1208 17:43:09.916842 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvq22" event={"ID":"e13583b7-7ad1-4129-8b1b-0ee32c5603df","Type":"ContainerStarted","Data":"21bb165ac8be11b715658d9094a501f7750f97da9b8e03b81a05f075c5638daa"}
Dec 08 17:43:10 crc kubenswrapper[5112]: I1208 17:43:10.924894 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4p756" event={"ID":"36b34f0a-51c8-41d9-a61c-dbc0104bea5d","Type":"ContainerStarted","Data":"fe0065443d2203b8506a3a316725cd4302de3bc871f18d4f8c46b63fcd9c3ff7"}
Dec 08 17:43:12 crc kubenswrapper[5112]: E1208 17:43:12.618410 5112 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6f5e811595ca12980ae3c34f2a35e64646b6c89adddcc9e33a8090bb47b71053" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 08 17:43:12 crc kubenswrapper[5112]: E1208 17:43:12.621889 5112 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6f5e811595ca12980ae3c34f2a35e64646b6c89adddcc9e33a8090bb47b71053" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 08 17:43:12 crc kubenswrapper[5112]: E1208 17:43:12.635070 5112 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6f5e811595ca12980ae3c34f2a35e64646b6c89adddcc9e33a8090bb47b71053" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 08 17:43:12 crc kubenswrapper[5112]: E1208 17:43:12.635233 5112 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-6gxxt" podUID="234e7e70-7bb6-457f-a170-f1349602c58a" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Dec 08 17:43:13 crc kubenswrapper[5112]: I1208 17:43:13.063549 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5llh9" event={"ID":"0f6a3ac4-dcc2-4fbd-8699-d97127b35495","Type":"ContainerStarted","Data":"df139d1fa3324ea4905697862db3a041d7a4dfffb695bc78532818bc83b7d109"}
Dec 08 17:43:13 crc kubenswrapper[5112]: I1208 17:43:13.507185 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rvq22" podStartSLOduration=10.488955559 podStartE2EDuration="28.507170954s" podCreationTimestamp="2025-12-08 17:42:45 +0000 UTC" firstStartedPulling="2025-12-08 17:42:46.81021327 +0000 UTC m=+143.819761971" lastFinishedPulling="2025-12-08 17:43:04.828428625 +0000 UTC m=+161.837977366" observedRunningTime="2025-12-08 17:43:13.503181797 +0000 UTC m=+170.512730498" watchObservedRunningTime="2025-12-08 17:43:13.507170954 +0000 UTC m=+170.516719655"
Dec 08 17:43:13 crc kubenswrapper[5112]: I1208 17:43:13.641953 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-vpxb8"
Dec 08 17:43:14 crc kubenswrapper[5112]: I1208 17:43:14.071401 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4flg" event={"ID":"a8b663e6-709e-4802-8101-44c949911229","Type":"ContainerStarted","Data":"42d38998a7c0c716b825c64305b20e22c7508abdbbabc644657925d9a971a778"}
Dec 08 17:43:15 crc kubenswrapper[5112]: I1208 17:43:15.468046 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-rvq22"
Dec 08 17:43:15 crc kubenswrapper[5112]: I1208 17:43:15.468131 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rvq22"
Dec 08 17:43:15 crc kubenswrapper[5112]: I1208 17:43:15.778998 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-2kppn"
Dec 08 17:43:16 crc kubenswrapper[5112]: I1208 17:43:16.250719 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rvq22"
Dec 08 17:43:17 crc kubenswrapper[5112]: I1208 17:43:17.095776 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phq66" event={"ID":"a4a649bd-963b-42eb-8283-2f6d98b54ef8","Type":"ContainerStarted","Data":"9adef01c0784afb97c60f66a2ea1fd723f38749e4d46b875452f46ccfbc0a0cb"}
Dec 08 17:43:17 crc kubenswrapper[5112]: I1208 17:43:17.099800 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-6gxxt_234e7e70-7bb6-457f-a170-f1349602c58a/kube-multus-additional-cni-plugins/0.log"
Dec 08 17:43:17 crc kubenswrapper[5112]: I1208 17:43:17.099866 5112 generic.go:358] "Generic (PLEG): container finished" podID="234e7e70-7bb6-457f-a170-f1349602c58a" containerID="6f5e811595ca12980ae3c34f2a35e64646b6c89adddcc9e33a8090bb47b71053" exitCode=137
Dec 08 17:43:17 crc kubenswrapper[5112]: I1208 17:43:17.099956 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-6gxxt" event={"ID":"234e7e70-7bb6-457f-a170-f1349602c58a","Type":"ContainerDied","Data":"6f5e811595ca12980ae3c34f2a35e64646b6c89adddcc9e33a8090bb47b71053"}
Dec 08 17:43:17 crc kubenswrapper[5112]: I1208 17:43:17.102603 5112 generic.go:358] "Generic (PLEG): container finished" podID="ea80841c-bb81-4bd4-a6b4-dde2e04b9351" containerID="9107d51784bb4e353e9753278cfa059c660e0e3bc0273bdbe08024c231c96b3a" exitCode=0
Dec 08 17:43:17 crc kubenswrapper[5112]: I1208 17:43:17.102664 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zngdv" event={"ID":"ea80841c-bb81-4bd4-a6b4-dde2e04b9351","Type":"ContainerDied","Data":"9107d51784bb4e353e9753278cfa059c660e0e3bc0273bdbe08024c231c96b3a"}
Dec 08 17:43:17 crc kubenswrapper[5112]: I1208 17:43:17.104703 5112 generic.go:358] "Generic (PLEG): container finished" podID="027b046b-01a1-48d8-a6b7-d03fd6509f1f" containerID="c5135dd242a8fdfa37be6f9408ab8de9691fc6e172272f13f9a2c91b1c554fce" exitCode=0
Dec 08 17:43:17 crc kubenswrapper[5112]: I1208 17:43:17.104806 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qqqtw" event={"ID":"027b046b-01a1-48d8-a6b7-d03fd6509f1f","Type":"ContainerDied","Data":"c5135dd242a8fdfa37be6f9408ab8de9691fc6e172272f13f9a2c91b1c554fce"}
Dec 08 17:43:17 crc kubenswrapper[5112]: I1208 17:43:17.106985 5112 generic.go:358] "Generic (PLEG): container finished" podID="0f6a3ac4-dcc2-4fbd-8699-d97127b35495" containerID="df139d1fa3324ea4905697862db3a041d7a4dfffb695bc78532818bc83b7d109" exitCode=0
Dec 08 17:43:17 crc kubenswrapper[5112]: I1208 17:43:17.107056 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5llh9" event={"ID":"0f6a3ac4-dcc2-4fbd-8699-d97127b35495","Type":"ContainerDied","Data":"df139d1fa3324ea4905697862db3a041d7a4dfffb695bc78532818bc83b7d109"}
Dec 08 17:43:17 crc kubenswrapper[5112]: I1208 17:43:17.112671 5112 generic.go:358] "Generic (PLEG): container finished" podID="36b34f0a-51c8-41d9-a61c-dbc0104bea5d" containerID="fe0065443d2203b8506a3a316725cd4302de3bc871f18d4f8c46b63fcd9c3ff7" exitCode=0
Dec 08 17:43:17 crc kubenswrapper[5112]: I1208 17:43:17.112758 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4p756" event={"ID":"36b34f0a-51c8-41d9-a61c-dbc0104bea5d","Type":"ContainerDied","Data":"fe0065443d2203b8506a3a316725cd4302de3bc871f18d4f8c46b63fcd9c3ff7"}
Dec 08 17:43:17 crc kubenswrapper[5112]: I1208 17:43:17.118766 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-phq66" podStartSLOduration=17.883107442 podStartE2EDuration="31.118746797s" podCreationTimestamp="2025-12-08 17:42:46 +0000 UTC" firstStartedPulling="2025-12-08 17:42:51.592757199 +0000 UTC m=+148.602305900" lastFinishedPulling="2025-12-08 17:43:04.828396544 +0000 UTC m=+161.837945255" observedRunningTime="2025-12-08 17:43:17.118031667 +0000 UTC m=+174.127580388" watchObservedRunningTime="2025-12-08 17:43:17.118746797 +0000 UTC m=+174.128295498"
Dec 08 17:43:17 crc kubenswrapper[5112]: I1208 17:43:17.155202 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-f4flg" podStartSLOduration=15.148829691 podStartE2EDuration="33.155146496s" podCreationTimestamp="2025-12-08 17:42:44 +0000 UTC" firstStartedPulling="2025-12-08 17:42:46.822945083 +0000 UTC m=+143.832493784" lastFinishedPulling="2025-12-08 17:43:04.829261888 +0000 UTC m=+161.838810589" observedRunningTime="2025-12-08 17:43:17.152877285 +0000 UTC m=+174.162425986" watchObservedRunningTime="2025-12-08 17:43:17.155146496 +0000 UTC m=+174.164695197"
Dec 08 17:43:17 crc kubenswrapper[5112]: I1208 17:43:17.159397 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rvq22"
Dec 08 17:43:17 crc kubenswrapper[5112]: I1208 17:43:17.902871 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-6gxxt_234e7e70-7bb6-457f-a170-f1349602c58a/kube-multus-additional-cni-plugins/0.log"
Dec 08 17:43:17 crc kubenswrapper[5112]: I1208 17:43:17.903232 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-6gxxt"
Dec 08 17:43:18 crc kubenswrapper[5112]: I1208 17:43:18.040999 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/234e7e70-7bb6-457f-a170-f1349602c58a-ready\") pod \"234e7e70-7bb6-457f-a170-f1349602c58a\" (UID: \"234e7e70-7bb6-457f-a170-f1349602c58a\") "
Dec 08 17:43:18 crc kubenswrapper[5112]: I1208 17:43:18.041167 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/234e7e70-7bb6-457f-a170-f1349602c58a-cni-sysctl-allowlist\") pod \"234e7e70-7bb6-457f-a170-f1349602c58a\" (UID: \"234e7e70-7bb6-457f-a170-f1349602c58a\") "
Dec 08 17:43:18 crc kubenswrapper[5112]: I1208 17:43:18.041254 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/234e7e70-7bb6-457f-a170-f1349602c58a-tuning-conf-dir\") pod \"234e7e70-7bb6-457f-a170-f1349602c58a\" (UID: \"234e7e70-7bb6-457f-a170-f1349602c58a\") "
Dec 08 17:43:18 crc kubenswrapper[5112]: I1208 17:43:18.041289 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdf9f\" (UniqueName: \"kubernetes.io/projected/234e7e70-7bb6-457f-a170-f1349602c58a-kube-api-access-gdf9f\") pod \"234e7e70-7bb6-457f-a170-f1349602c58a\" (UID: \"234e7e70-7bb6-457f-a170-f1349602c58a\") "
Dec 08 17:43:18 crc kubenswrapper[5112]: I1208 17:43:18.041376 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/234e7e70-7bb6-457f-a170-f1349602c58a-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "234e7e70-7bb6-457f-a170-f1349602c58a" (UID: "234e7e70-7bb6-457f-a170-f1349602c58a"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 17:43:18 crc kubenswrapper[5112]: I1208 17:43:18.041749 5112 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/234e7e70-7bb6-457f-a170-f1349602c58a-tuning-conf-dir\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:18 crc kubenswrapper[5112]: I1208 17:43:18.041841 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/234e7e70-7bb6-457f-a170-f1349602c58a-ready" (OuterVolumeSpecName: "ready") pod "234e7e70-7bb6-457f-a170-f1349602c58a" (UID: "234e7e70-7bb6-457f-a170-f1349602c58a"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:43:18 crc kubenswrapper[5112]: I1208 17:43:18.041932 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/234e7e70-7bb6-457f-a170-f1349602c58a-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "234e7e70-7bb6-457f-a170-f1349602c58a" (UID: "234e7e70-7bb6-457f-a170-f1349602c58a"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:43:18 crc kubenswrapper[5112]: I1208 17:43:18.047204 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/234e7e70-7bb6-457f-a170-f1349602c58a-kube-api-access-gdf9f" (OuterVolumeSpecName: "kube-api-access-gdf9f") pod "234e7e70-7bb6-457f-a170-f1349602c58a" (UID: "234e7e70-7bb6-457f-a170-f1349602c58a"). InnerVolumeSpecName "kube-api-access-gdf9f". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:43:18 crc kubenswrapper[5112]: I1208 17:43:18.119407 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gsp8f" event={"ID":"fb826094-3e88-481d-bf22-ad5c3eb0f280","Type":"ContainerStarted","Data":"64134e231bd4a800161dc7ecb8ebf2e18a64304267a5c2f5c3888c413178f91e"}
Dec 08 17:43:18 crc kubenswrapper[5112]: I1208 17:43:18.120930 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-6gxxt_234e7e70-7bb6-457f-a170-f1349602c58a/kube-multus-additional-cni-plugins/0.log"
Dec 08 17:43:18 crc kubenswrapper[5112]: I1208 17:43:18.121095 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-6gxxt"
Dec 08 17:43:18 crc kubenswrapper[5112]: I1208 17:43:18.121124 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-6gxxt" event={"ID":"234e7e70-7bb6-457f-a170-f1349602c58a","Type":"ContainerDied","Data":"df7b9197f4e5181b94ea6e3513730df2dcd58c18ce67ca21bf2302357e969ac7"}
Dec 08 17:43:18 crc kubenswrapper[5112]: I1208 17:43:18.121189 5112 scope.go:117] "RemoveContainer" containerID="6f5e811595ca12980ae3c34f2a35e64646b6c89adddcc9e33a8090bb47b71053"
Dec 08 17:43:18 crc kubenswrapper[5112]: I1208 17:43:18.142708 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gdf9f\" (UniqueName: \"kubernetes.io/projected/234e7e70-7bb6-457f-a170-f1349602c58a-kube-api-access-gdf9f\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:18 crc kubenswrapper[5112]: I1208 17:43:18.142745 5112 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/234e7e70-7bb6-457f-a170-f1349602c58a-ready\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:18 crc kubenswrapper[5112]: I1208 17:43:18.142755 5112 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/234e7e70-7bb6-457f-a170-f1349602c58a-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:18 crc kubenswrapper[5112]: I1208 17:43:18.153061 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-6gxxt"]
Dec 08 17:43:18 crc kubenswrapper[5112]: I1208 17:43:18.158884 5112 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-6gxxt"]
Dec 08 17:43:18 crc kubenswrapper[5112]: I1208 17:43:18.535927 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gsp8f" podStartSLOduration=17.270209146 podStartE2EDuration="31.535906427s" podCreationTimestamp="2025-12-08 17:42:47 +0000 UTC" firstStartedPulling="2025-12-08 17:42:50.562709624 +0000 UTC m=+147.572258325" lastFinishedPulling="2025-12-08 17:43:04.828406895 +0000 UTC m=+161.837955606" observedRunningTime="2025-12-08 17:43:18.53416287 +0000 UTC m=+175.543711591" watchObservedRunningTime="2025-12-08 17:43:18.535906427 +0000 UTC m=+175.545455128"
Dec 08 17:43:19 crc kubenswrapper[5112]: I1208 17:43:19.104197 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 17:43:19 crc kubenswrapper[5112]: I1208 17:43:19.133518 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4p756" event={"ID":"36b34f0a-51c8-41d9-a61c-dbc0104bea5d","Type":"ContainerStarted","Data":"5bb55f30b0992808f92348dca60115c5aa9bbe4ad80da51e1dc8268c34faec77"}
Dec 08 17:43:19 crc kubenswrapper[5112]: I1208 17:43:19.141006 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zngdv" event={"ID":"ea80841c-bb81-4bd4-a6b4-dde2e04b9351","Type":"ContainerStarted","Data":"57f9cc5a2d006bcbdadf8fc7757278396c8d920f679b5dc9b68f70b6f633515f"}
Dec 08 17:43:19 crc kubenswrapper[5112]: I1208 17:43:19.273510 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4p756" podStartSLOduration=17.697600811 podStartE2EDuration="31.273485432s" podCreationTimestamp="2025-12-08 17:42:48 +0000 UTC" firstStartedPulling="2025-12-08 17:42:52.629944824 +0000 UTC m=+149.639493525" lastFinishedPulling="2025-12-08 17:43:06.205829405 +0000 UTC m=+163.215378146" observedRunningTime="2025-12-08 17:43:19.157321097 +0000 UTC m=+176.166869808" watchObservedRunningTime="2025-12-08 17:43:19.273485432 +0000 UTC m=+176.283034163"
Dec 08 17:43:19 crc kubenswrapper[5112]: I1208 17:43:19.279352 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zngdv" podStartSLOduration=17.233883882 podStartE2EDuration="35.279325559s" podCreationTimestamp="2025-12-08 17:42:44 +0000 UTC" firstStartedPulling="2025-12-08 17:42:46.784065287 +0000 UTC m=+143.793613988" lastFinishedPulling="2025-12-08 17:43:04.829506964 +0000 UTC m=+161.839055665" observedRunningTime="2025-12-08 17:43:19.272738612 +0000 UTC m=+176.282287313" watchObservedRunningTime="2025-12-08 17:43:19.279325559 +0000 UTC m=+176.288874300"
Dec 08 17:43:19 crc kubenswrapper[5112]: I1208 17:43:19.323450 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="234e7e70-7bb6-457f-a170-f1349602c58a" path="/var/lib/kubelet/pods/234e7e70-7bb6-457f-a170-f1349602c58a/volumes"
Dec 08 17:43:19 crc kubenswrapper[5112]: I1208 17:43:19.397893 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4p756"
Dec 08 17:43:19 crc kubenswrapper[5112]: I1208 17:43:19.397970 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-4p756"
Dec 08 17:43:19 crc kubenswrapper[5112]: I1208 17:43:19.491520 5112 kubelet.go:2553] "SyncLoop DELETE" source="api"
pods=["openshift-marketplace/certified-operators-rvq22"] Dec 08 17:43:19 crc kubenswrapper[5112]: I1208 17:43:19.491814 5112 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rvq22" podUID="e13583b7-7ad1-4129-8b1b-0ee32c5603df" containerName="registry-server" containerID="cri-o://21bb165ac8be11b715658d9094a501f7750f97da9b8e03b81a05f075c5638daa" gracePeriod=2 Dec 08 17:43:20 crc kubenswrapper[5112]: I1208 17:43:20.666394 5112 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4p756" podUID="36b34f0a-51c8-41d9-a61c-dbc0104bea5d" containerName="registry-server" probeResult="failure" output=< Dec 08 17:43:20 crc kubenswrapper[5112]: timeout: failed to connect service ":50051" within 1s Dec 08 17:43:20 crc kubenswrapper[5112]: > Dec 08 17:43:21 crc kubenswrapper[5112]: I1208 17:43:21.154725 5112 generic.go:358] "Generic (PLEG): container finished" podID="e13583b7-7ad1-4129-8b1b-0ee32c5603df" containerID="21bb165ac8be11b715658d9094a501f7750f97da9b8e03b81a05f075c5638daa" exitCode=0 Dec 08 17:43:21 crc kubenswrapper[5112]: I1208 17:43:21.155039 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvq22" event={"ID":"e13583b7-7ad1-4129-8b1b-0ee32c5603df","Type":"ContainerDied","Data":"21bb165ac8be11b715658d9094a501f7750f97da9b8e03b81a05f075c5638daa"} Dec 08 17:43:21 crc kubenswrapper[5112]: I1208 17:43:21.158296 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5llh9" event={"ID":"0f6a3ac4-dcc2-4fbd-8699-d97127b35495","Type":"ContainerStarted","Data":"d361ee66d2f6d24d3a4f42cea5d791471273c0cfc1c5ac0d170db461f8e374e2"} Dec 08 17:43:21 crc kubenswrapper[5112]: I1208 17:43:21.160531 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qqqtw" 
event={"ID":"027b046b-01a1-48d8-a6b7-d03fd6509f1f","Type":"ContainerStarted","Data":"3cb26a24f82051a00896f32287574c87cb6cf6d6b8dbacd12f531681e6d7e87a"} Dec 08 17:43:21 crc kubenswrapper[5112]: I1208 17:43:21.201496 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5llh9" podStartSLOduration=19.618899965 podStartE2EDuration="33.201477656s" podCreationTimestamp="2025-12-08 17:42:48 +0000 UTC" firstStartedPulling="2025-12-08 17:42:52.626699767 +0000 UTC m=+149.636248468" lastFinishedPulling="2025-12-08 17:43:06.209277438 +0000 UTC m=+163.218826159" observedRunningTime="2025-12-08 17:43:21.181707854 +0000 UTC m=+178.191256555" watchObservedRunningTime="2025-12-08 17:43:21.201477656 +0000 UTC m=+178.211026357" Dec 08 17:43:21 crc kubenswrapper[5112]: I1208 17:43:21.203528 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qqqtw" podStartSLOduration=20.390408396 podStartE2EDuration="36.203516221s" podCreationTimestamp="2025-12-08 17:42:45 +0000 UTC" firstStartedPulling="2025-12-08 17:42:49.029175343 +0000 UTC m=+146.038724044" lastFinishedPulling="2025-12-08 17:43:04.842283168 +0000 UTC m=+161.851831869" observedRunningTime="2025-12-08 17:43:21.198601568 +0000 UTC m=+178.208150269" watchObservedRunningTime="2025-12-08 17:43:21.203516221 +0000 UTC m=+178.213064922" Dec 08 17:43:21 crc kubenswrapper[5112]: I1208 17:43:21.325712 5112 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rvq22" Dec 08 17:43:21 crc kubenswrapper[5112]: I1208 17:43:21.400438 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e13583b7-7ad1-4129-8b1b-0ee32c5603df-utilities\") pod \"e13583b7-7ad1-4129-8b1b-0ee32c5603df\" (UID: \"e13583b7-7ad1-4129-8b1b-0ee32c5603df\") " Dec 08 17:43:21 crc kubenswrapper[5112]: I1208 17:43:21.400526 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sgr4\" (UniqueName: \"kubernetes.io/projected/e13583b7-7ad1-4129-8b1b-0ee32c5603df-kube-api-access-6sgr4\") pod \"e13583b7-7ad1-4129-8b1b-0ee32c5603df\" (UID: \"e13583b7-7ad1-4129-8b1b-0ee32c5603df\") " Dec 08 17:43:21 crc kubenswrapper[5112]: I1208 17:43:21.400580 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e13583b7-7ad1-4129-8b1b-0ee32c5603df-catalog-content\") pod \"e13583b7-7ad1-4129-8b1b-0ee32c5603df\" (UID: \"e13583b7-7ad1-4129-8b1b-0ee32c5603df\") " Dec 08 17:43:21 crc kubenswrapper[5112]: I1208 17:43:21.401285 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e13583b7-7ad1-4129-8b1b-0ee32c5603df-utilities" (OuterVolumeSpecName: "utilities") pod "e13583b7-7ad1-4129-8b1b-0ee32c5603df" (UID: "e13583b7-7ad1-4129-8b1b-0ee32c5603df"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:43:21 crc kubenswrapper[5112]: I1208 17:43:21.414275 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e13583b7-7ad1-4129-8b1b-0ee32c5603df-kube-api-access-6sgr4" (OuterVolumeSpecName: "kube-api-access-6sgr4") pod "e13583b7-7ad1-4129-8b1b-0ee32c5603df" (UID: "e13583b7-7ad1-4129-8b1b-0ee32c5603df"). InnerVolumeSpecName "kube-api-access-6sgr4". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:43:21 crc kubenswrapper[5112]: I1208 17:43:21.436585 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e13583b7-7ad1-4129-8b1b-0ee32c5603df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e13583b7-7ad1-4129-8b1b-0ee32c5603df" (UID: "e13583b7-7ad1-4129-8b1b-0ee32c5603df"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:43:21 crc kubenswrapper[5112]: I1208 17:43:21.502628 5112 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e13583b7-7ad1-4129-8b1b-0ee32c5603df-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:21 crc kubenswrapper[5112]: I1208 17:43:21.502670 5112 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e13583b7-7ad1-4129-8b1b-0ee32c5603df-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:21 crc kubenswrapper[5112]: I1208 17:43:21.502686 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6sgr4\" (UniqueName: \"kubernetes.io/projected/e13583b7-7ad1-4129-8b1b-0ee32c5603df-kube-api-access-6sgr4\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:22 crc kubenswrapper[5112]: I1208 17:43:22.174568 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvq22" event={"ID":"e13583b7-7ad1-4129-8b1b-0ee32c5603df","Type":"ContainerDied","Data":"2fec56c15e1c85c85c8c0638b3d8bae70d14ac9c1842c805d9681197bc407837"} Dec 08 17:43:22 crc kubenswrapper[5112]: I1208 17:43:22.174852 5112 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rvq22" Dec 08 17:43:22 crc kubenswrapper[5112]: I1208 17:43:22.175037 5112 scope.go:117] "RemoveContainer" containerID="21bb165ac8be11b715658d9094a501f7750f97da9b8e03b81a05f075c5638daa" Dec 08 17:43:22 crc kubenswrapper[5112]: I1208 17:43:22.209027 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rvq22"] Dec 08 17:43:22 crc kubenswrapper[5112]: I1208 17:43:22.214111 5112 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rvq22"] Dec 08 17:43:22 crc kubenswrapper[5112]: I1208 17:43:22.214880 5112 scope.go:117] "RemoveContainer" containerID="66892dceb412ecd7803975e277354d1bcc173a31c832a44bf99fb32588a17b57" Dec 08 17:43:22 crc kubenswrapper[5112]: I1208 17:43:22.216286 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 08 17:43:22 crc kubenswrapper[5112]: I1208 17:43:22.216840 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e13583b7-7ad1-4129-8b1b-0ee32c5603df" containerName="registry-server" Dec 08 17:43:22 crc kubenswrapper[5112]: I1208 17:43:22.216857 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="e13583b7-7ad1-4129-8b1b-0ee32c5603df" containerName="registry-server" Dec 08 17:43:22 crc kubenswrapper[5112]: I1208 17:43:22.216875 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="234e7e70-7bb6-457f-a170-f1349602c58a" containerName="kube-multus-additional-cni-plugins" Dec 08 17:43:22 crc kubenswrapper[5112]: I1208 17:43:22.216882 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="234e7e70-7bb6-457f-a170-f1349602c58a" containerName="kube-multus-additional-cni-plugins" Dec 08 17:43:22 crc kubenswrapper[5112]: I1208 17:43:22.216889 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8fcd7359-28db-4b18-8d86-eb663b9a3807" 
containerName="pruner" Dec 08 17:43:22 crc kubenswrapper[5112]: I1208 17:43:22.216894 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fcd7359-28db-4b18-8d86-eb663b9a3807" containerName="pruner" Dec 08 17:43:22 crc kubenswrapper[5112]: I1208 17:43:22.216910 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e13583b7-7ad1-4129-8b1b-0ee32c5603df" containerName="extract-content" Dec 08 17:43:22 crc kubenswrapper[5112]: I1208 17:43:22.216915 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="e13583b7-7ad1-4129-8b1b-0ee32c5603df" containerName="extract-content" Dec 08 17:43:22 crc kubenswrapper[5112]: I1208 17:43:22.216933 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e13583b7-7ad1-4129-8b1b-0ee32c5603df" containerName="extract-utilities" Dec 08 17:43:22 crc kubenswrapper[5112]: I1208 17:43:22.216938 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="e13583b7-7ad1-4129-8b1b-0ee32c5603df" containerName="extract-utilities" Dec 08 17:43:22 crc kubenswrapper[5112]: I1208 17:43:22.216948 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9bdcf563-b973-48ef-8c03-dbc3dc5eed6b" containerName="pruner" Dec 08 17:43:22 crc kubenswrapper[5112]: I1208 17:43:22.216953 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bdcf563-b973-48ef-8c03-dbc3dc5eed6b" containerName="pruner" Dec 08 17:43:22 crc kubenswrapper[5112]: I1208 17:43:22.217058 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="9bdcf563-b973-48ef-8c03-dbc3dc5eed6b" containerName="pruner" Dec 08 17:43:22 crc kubenswrapper[5112]: I1208 17:43:22.217068 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="8fcd7359-28db-4b18-8d86-eb663b9a3807" containerName="pruner" Dec 08 17:43:22 crc kubenswrapper[5112]: I1208 17:43:22.217091 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="e13583b7-7ad1-4129-8b1b-0ee32c5603df" 
containerName="registry-server" Dec 08 17:43:22 crc kubenswrapper[5112]: I1208 17:43:22.217101 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="234e7e70-7bb6-457f-a170-f1349602c58a" containerName="kube-multus-additional-cni-plugins" Dec 08 17:43:22 crc kubenswrapper[5112]: I1208 17:43:22.227132 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 08 17:43:22 crc kubenswrapper[5112]: I1208 17:43:22.227145 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 17:43:22 crc kubenswrapper[5112]: I1208 17:43:22.231969 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Dec 08 17:43:22 crc kubenswrapper[5112]: I1208 17:43:22.232269 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Dec 08 17:43:22 crc kubenswrapper[5112]: I1208 17:43:22.236017 5112 scope.go:117] "RemoveContainer" containerID="7d9c3b55d77046972d1186cc63cf811faadd28d8b6c5a511b7567603b513b6c9" Dec 08 17:43:22 crc kubenswrapper[5112]: I1208 17:43:22.312037 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d3e1b9b-c120-481e-87d6-251ed46b7af3-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"0d3e1b9b-c120-481e-87d6-251ed46b7af3\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 17:43:22 crc kubenswrapper[5112]: I1208 17:43:22.312108 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0d3e1b9b-c120-481e-87d6-251ed46b7af3-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"0d3e1b9b-c120-481e-87d6-251ed46b7af3\") " 
pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 17:43:22 crc kubenswrapper[5112]: I1208 17:43:22.413267 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d3e1b9b-c120-481e-87d6-251ed46b7af3-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"0d3e1b9b-c120-481e-87d6-251ed46b7af3\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 17:43:22 crc kubenswrapper[5112]: I1208 17:43:22.413334 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0d3e1b9b-c120-481e-87d6-251ed46b7af3-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"0d3e1b9b-c120-481e-87d6-251ed46b7af3\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 17:43:22 crc kubenswrapper[5112]: I1208 17:43:22.413455 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0d3e1b9b-c120-481e-87d6-251ed46b7af3-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"0d3e1b9b-c120-481e-87d6-251ed46b7af3\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 17:43:22 crc kubenswrapper[5112]: I1208 17:43:22.449904 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d3e1b9b-c120-481e-87d6-251ed46b7af3-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"0d3e1b9b-c120-481e-87d6-251ed46b7af3\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 17:43:22 crc kubenswrapper[5112]: I1208 17:43:22.575061 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 17:43:23 crc kubenswrapper[5112]: I1208 17:43:23.148890 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 08 17:43:23 crc kubenswrapper[5112]: W1208 17:43:23.157868 5112 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod0d3e1b9b_c120_481e_87d6_251ed46b7af3.slice/crio-74c8f3b224dd92abb99365aa33c6c60993eeb05a84d15798f1df67e6a3489998 WatchSource:0}: Error finding container 74c8f3b224dd92abb99365aa33c6c60993eeb05a84d15798f1df67e6a3489998: Status 404 returned error can't find the container with id 74c8f3b224dd92abb99365aa33c6c60993eeb05a84d15798f1df67e6a3489998 Dec 08 17:43:23 crc kubenswrapper[5112]: I1208 17:43:23.181431 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"0d3e1b9b-c120-481e-87d6-251ed46b7af3","Type":"ContainerStarted","Data":"74c8f3b224dd92abb99365aa33c6c60993eeb05a84d15798f1df67e6a3489998"} Dec 08 17:43:23 crc kubenswrapper[5112]: I1208 17:43:23.341534 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e13583b7-7ad1-4129-8b1b-0ee32c5603df" path="/var/lib/kubelet/pods/e13583b7-7ad1-4129-8b1b-0ee32c5603df/volumes" Dec 08 17:43:24 crc kubenswrapper[5112]: I1208 17:43:24.188582 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"0d3e1b9b-c120-481e-87d6-251ed46b7af3","Type":"ContainerStarted","Data":"7335d62faa6842ee47a1484c44cc6cfa85bfa0e466879970b32a32e7a643a5ae"} Dec 08 17:43:24 crc kubenswrapper[5112]: I1208 17:43:24.208713 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-12-crc" podStartSLOduration=2.208691487 podStartE2EDuration="2.208691487s" podCreationTimestamp="2025-12-08 17:43:22 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:24.204641618 +0000 UTC m=+181.214190319" watchObservedRunningTime="2025-12-08 17:43:24.208691487 +0000 UTC m=+181.218240188" Dec 08 17:43:24 crc kubenswrapper[5112]: E1208 17:43:24.438805 5112 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod0d3e1b9b_c120_481e_87d6_251ed46b7af3.slice/crio-7335d62faa6842ee47a1484c44cc6cfa85bfa0e466879970b32a32e7a643a5ae.scope\": RecentStats: unable to find data in memory cache]" Dec 08 17:43:25 crc kubenswrapper[5112]: I1208 17:43:25.064333 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zngdv" Dec 08 17:43:25 crc kubenswrapper[5112]: I1208 17:43:25.064719 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-zngdv" Dec 08 17:43:25 crc kubenswrapper[5112]: I1208 17:43:25.065515 5112 ???:1] "http: TLS handshake error from 192.168.126.11:49892: no serving certificate available for the kubelet" Dec 08 17:43:25 crc kubenswrapper[5112]: I1208 17:43:25.109668 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zngdv" Dec 08 17:43:25 crc kubenswrapper[5112]: I1208 17:43:25.194990 5112 generic.go:358] "Generic (PLEG): container finished" podID="0d3e1b9b-c120-481e-87d6-251ed46b7af3" containerID="7335d62faa6842ee47a1484c44cc6cfa85bfa0e466879970b32a32e7a643a5ae" exitCode=0 Dec 08 17:43:25 crc kubenswrapper[5112]: I1208 17:43:25.195936 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"0d3e1b9b-c120-481e-87d6-251ed46b7af3","Type":"ContainerDied","Data":"7335d62faa6842ee47a1484c44cc6cfa85bfa0e466879970b32a32e7a643a5ae"} Dec 08 17:43:25 crc kubenswrapper[5112]: I1208 
17:43:25.235762 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zngdv" Dec 08 17:43:25 crc kubenswrapper[5112]: I1208 17:43:25.301750 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-f4flg" Dec 08 17:43:25 crc kubenswrapper[5112]: I1208 17:43:25.301810 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-f4flg" Dec 08 17:43:25 crc kubenswrapper[5112]: I1208 17:43:25.341236 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-f4flg" Dec 08 17:43:25 crc kubenswrapper[5112]: I1208 17:43:25.744429 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-qqqtw" Dec 08 17:43:25 crc kubenswrapper[5112]: I1208 17:43:25.745295 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qqqtw" Dec 08 17:43:25 crc kubenswrapper[5112]: I1208 17:43:25.816377 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qqqtw" Dec 08 17:43:26 crc kubenswrapper[5112]: I1208 17:43:26.242973 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-f4flg" Dec 08 17:43:26 crc kubenswrapper[5112]: I1208 17:43:26.244036 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qqqtw" Dec 08 17:43:26 crc kubenswrapper[5112]: I1208 17:43:26.467261 5112 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 17:43:26 crc kubenswrapper[5112]: I1208 17:43:26.517124 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d3e1b9b-c120-481e-87d6-251ed46b7af3-kube-api-access\") pod \"0d3e1b9b-c120-481e-87d6-251ed46b7af3\" (UID: \"0d3e1b9b-c120-481e-87d6-251ed46b7af3\") " Dec 08 17:43:26 crc kubenswrapper[5112]: I1208 17:43:26.517332 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0d3e1b9b-c120-481e-87d6-251ed46b7af3-kubelet-dir\") pod \"0d3e1b9b-c120-481e-87d6-251ed46b7af3\" (UID: \"0d3e1b9b-c120-481e-87d6-251ed46b7af3\") " Dec 08 17:43:26 crc kubenswrapper[5112]: I1208 17:43:26.517457 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d3e1b9b-c120-481e-87d6-251ed46b7af3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0d3e1b9b-c120-481e-87d6-251ed46b7af3" (UID: "0d3e1b9b-c120-481e-87d6-251ed46b7af3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:43:26 crc kubenswrapper[5112]: I1208 17:43:26.518062 5112 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0d3e1b9b-c120-481e-87d6-251ed46b7af3-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:26 crc kubenswrapper[5112]: I1208 17:43:26.524913 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d3e1b9b-c120-481e-87d6-251ed46b7af3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0d3e1b9b-c120-481e-87d6-251ed46b7af3" (UID: "0d3e1b9b-c120-481e-87d6-251ed46b7af3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:43:26 crc kubenswrapper[5112]: I1208 17:43:26.619169 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d3e1b9b-c120-481e-87d6-251ed46b7af3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:26 crc kubenswrapper[5112]: I1208 17:43:26.810931 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 08 17:43:26 crc kubenswrapper[5112]: I1208 17:43:26.811826 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0d3e1b9b-c120-481e-87d6-251ed46b7af3" containerName="pruner" Dec 08 17:43:26 crc kubenswrapper[5112]: I1208 17:43:26.811851 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d3e1b9b-c120-481e-87d6-251ed46b7af3" containerName="pruner" Dec 08 17:43:26 crc kubenswrapper[5112]: I1208 17:43:26.812061 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="0d3e1b9b-c120-481e-87d6-251ed46b7af3" containerName="pruner" Dec 08 17:43:26 crc kubenswrapper[5112]: I1208 17:43:26.821833 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 08 17:43:26 crc kubenswrapper[5112]: I1208 17:43:26.822017 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 08 17:43:26 crc kubenswrapper[5112]: I1208 17:43:26.922363 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ec521621-eb82-4b99-bd04-c1256bd46f3d-var-lock\") pod \"installer-12-crc\" (UID: \"ec521621-eb82-4b99-bd04-c1256bd46f3d\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 17:43:26 crc kubenswrapper[5112]: I1208 17:43:26.922487 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ec521621-eb82-4b99-bd04-c1256bd46f3d-kubelet-dir\") pod \"installer-12-crc\" (UID: \"ec521621-eb82-4b99-bd04-c1256bd46f3d\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 17:43:26 crc kubenswrapper[5112]: I1208 17:43:26.922575 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec521621-eb82-4b99-bd04-c1256bd46f3d-kube-api-access\") pod \"installer-12-crc\" (UID: \"ec521621-eb82-4b99-bd04-c1256bd46f3d\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 17:43:27 crc kubenswrapper[5112]: I1208 17:43:27.024222 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ec521621-eb82-4b99-bd04-c1256bd46f3d-kubelet-dir\") pod \"installer-12-crc\" (UID: \"ec521621-eb82-4b99-bd04-c1256bd46f3d\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 17:43:27 crc kubenswrapper[5112]: I1208 17:43:27.024308 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec521621-eb82-4b99-bd04-c1256bd46f3d-kube-api-access\") pod \"installer-12-crc\" (UID: \"ec521621-eb82-4b99-bd04-c1256bd46f3d\") " 
pod="openshift-kube-apiserver/installer-12-crc"
Dec 08 17:43:27 crc kubenswrapper[5112]: I1208 17:43:27.024415 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ec521621-eb82-4b99-bd04-c1256bd46f3d-var-lock\") pod \"installer-12-crc\" (UID: \"ec521621-eb82-4b99-bd04-c1256bd46f3d\") " pod="openshift-kube-apiserver/installer-12-crc"
Dec 08 17:43:27 crc kubenswrapper[5112]: I1208 17:43:27.024519 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ec521621-eb82-4b99-bd04-c1256bd46f3d-var-lock\") pod \"installer-12-crc\" (UID: \"ec521621-eb82-4b99-bd04-c1256bd46f3d\") " pod="openshift-kube-apiserver/installer-12-crc"
Dec 08 17:43:27 crc kubenswrapper[5112]: I1208 17:43:27.024578 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ec521621-eb82-4b99-bd04-c1256bd46f3d-kubelet-dir\") pod \"installer-12-crc\" (UID: \"ec521621-eb82-4b99-bd04-c1256bd46f3d\") " pod="openshift-kube-apiserver/installer-12-crc"
Dec 08 17:43:27 crc kubenswrapper[5112]: I1208 17:43:27.041697 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec521621-eb82-4b99-bd04-c1256bd46f3d-kube-api-access\") pod \"installer-12-crc\" (UID: \"ec521621-eb82-4b99-bd04-c1256bd46f3d\") " pod="openshift-kube-apiserver/installer-12-crc"
Dec 08 17:43:27 crc kubenswrapper[5112]: I1208 17:43:27.080266 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-phq66"
Dec 08 17:43:27 crc kubenswrapper[5112]: I1208 17:43:27.081993 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-phq66"
Dec 08 17:43:27 crc kubenswrapper[5112]: I1208 17:43:27.126470 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-phq66"
Dec 08 17:43:27 crc kubenswrapper[5112]: I1208 17:43:27.152636 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Dec 08 17:43:27 crc kubenswrapper[5112]: I1208 17:43:27.207731 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Dec 08 17:43:27 crc kubenswrapper[5112]: I1208 17:43:27.207750 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"0d3e1b9b-c120-481e-87d6-251ed46b7af3","Type":"ContainerDied","Data":"74c8f3b224dd92abb99365aa33c6c60993eeb05a84d15798f1df67e6a3489998"}
Dec 08 17:43:27 crc kubenswrapper[5112]: I1208 17:43:27.207789 5112 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74c8f3b224dd92abb99365aa33c6c60993eeb05a84d15798f1df67e6a3489998"
Dec 08 17:43:27 crc kubenswrapper[5112]: I1208 17:43:27.261701 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-phq66"
Dec 08 17:43:27 crc kubenswrapper[5112]: I1208 17:43:27.353369 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Dec 08 17:43:27 crc kubenswrapper[5112]: W1208 17:43:27.361479 5112 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podec521621_eb82_4b99_bd04_c1256bd46f3d.slice/crio-5aee8da2acbae57a70ae21b4f3110fe8c2d77d9a818048f9e609b77f57fa4371 WatchSource:0}: Error finding container 5aee8da2acbae57a70ae21b4f3110fe8c2d77d9a818048f9e609b77f57fa4371: Status 404 returned error can't find the container with id 5aee8da2acbae57a70ae21b4f3110fe8c2d77d9a818048f9e609b77f57fa4371
Dec 08 17:43:27 crc kubenswrapper[5112]: I1208 17:43:27.468999 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gsp8f"
Dec 08 17:43:27 crc kubenswrapper[5112]: I1208 17:43:27.469067 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-gsp8f"
Dec 08 17:43:27 crc kubenswrapper[5112]: I1208 17:43:27.523105 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gsp8f"
Dec 08 17:43:27 crc kubenswrapper[5112]: I1208 17:43:27.889812 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qqqtw"]
Dec 08 17:43:28 crc kubenswrapper[5112]: I1208 17:43:28.214137 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"ec521621-eb82-4b99-bd04-c1256bd46f3d","Type":"ContainerStarted","Data":"801af624bfb20fb357bc073ac09b2ad9e88d500474d82e7f3af4d494208670f0"}
Dec 08 17:43:28 crc kubenswrapper[5112]: I1208 17:43:28.214459 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"ec521621-eb82-4b99-bd04-c1256bd46f3d","Type":"ContainerStarted","Data":"5aee8da2acbae57a70ae21b4f3110fe8c2d77d9a818048f9e609b77f57fa4371"}
Dec 08 17:43:28 crc kubenswrapper[5112]: I1208 17:43:28.214519 5112 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qqqtw" podUID="027b046b-01a1-48d8-a6b7-d03fd6509f1f" containerName="registry-server" containerID="cri-o://3cb26a24f82051a00896f32287574c87cb6cf6d6b8dbacd12f531681e6d7e87a" gracePeriod=2
Dec 08 17:43:28 crc kubenswrapper[5112]: I1208 17:43:28.232693 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=2.232675055 podStartE2EDuration="2.232675055s" podCreationTimestamp="2025-12-08 17:43:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:28.228483032 +0000 UTC m=+185.238031743" watchObservedRunningTime="2025-12-08 17:43:28.232675055 +0000 UTC m=+185.242223746"
Dec 08 17:43:28 crc kubenswrapper[5112]: I1208 17:43:28.267231 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gsp8f"
Dec 08 17:43:28 crc kubenswrapper[5112]: I1208 17:43:28.591158 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qqqtw"
Dec 08 17:43:28 crc kubenswrapper[5112]: I1208 17:43:28.646496 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpfw5\" (UniqueName: \"kubernetes.io/projected/027b046b-01a1-48d8-a6b7-d03fd6509f1f-kube-api-access-dpfw5\") pod \"027b046b-01a1-48d8-a6b7-d03fd6509f1f\" (UID: \"027b046b-01a1-48d8-a6b7-d03fd6509f1f\") "
Dec 08 17:43:28 crc kubenswrapper[5112]: I1208 17:43:28.646622 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/027b046b-01a1-48d8-a6b7-d03fd6509f1f-utilities\") pod \"027b046b-01a1-48d8-a6b7-d03fd6509f1f\" (UID: \"027b046b-01a1-48d8-a6b7-d03fd6509f1f\") "
Dec 08 17:43:28 crc kubenswrapper[5112]: I1208 17:43:28.646697 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/027b046b-01a1-48d8-a6b7-d03fd6509f1f-catalog-content\") pod \"027b046b-01a1-48d8-a6b7-d03fd6509f1f\" (UID: \"027b046b-01a1-48d8-a6b7-d03fd6509f1f\") "
Dec 08 17:43:28 crc kubenswrapper[5112]: I1208 17:43:28.647715 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/027b046b-01a1-48d8-a6b7-d03fd6509f1f-utilities" (OuterVolumeSpecName: "utilities") pod "027b046b-01a1-48d8-a6b7-d03fd6509f1f" (UID: "027b046b-01a1-48d8-a6b7-d03fd6509f1f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:43:28 crc kubenswrapper[5112]: I1208 17:43:28.656255 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/027b046b-01a1-48d8-a6b7-d03fd6509f1f-kube-api-access-dpfw5" (OuterVolumeSpecName: "kube-api-access-dpfw5") pod "027b046b-01a1-48d8-a6b7-d03fd6509f1f" (UID: "027b046b-01a1-48d8-a6b7-d03fd6509f1f"). InnerVolumeSpecName "kube-api-access-dpfw5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:43:28 crc kubenswrapper[5112]: I1208 17:43:28.698413 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/027b046b-01a1-48d8-a6b7-d03fd6509f1f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "027b046b-01a1-48d8-a6b7-d03fd6509f1f" (UID: "027b046b-01a1-48d8-a6b7-d03fd6509f1f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:43:28 crc kubenswrapper[5112]: I1208 17:43:28.747959 5112 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/027b046b-01a1-48d8-a6b7-d03fd6509f1f-utilities\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:28 crc kubenswrapper[5112]: I1208 17:43:28.747988 5112 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/027b046b-01a1-48d8-a6b7-d03fd6509f1f-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:28 crc kubenswrapper[5112]: I1208 17:43:28.747998 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dpfw5\" (UniqueName: \"kubernetes.io/projected/027b046b-01a1-48d8-a6b7-d03fd6509f1f-kube-api-access-dpfw5\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:29 crc kubenswrapper[5112]: I1208 17:43:29.222039 5112 generic.go:358] "Generic (PLEG): container finished" podID="027b046b-01a1-48d8-a6b7-d03fd6509f1f" containerID="3cb26a24f82051a00896f32287574c87cb6cf6d6b8dbacd12f531681e6d7e87a" exitCode=0
Dec 08 17:43:29 crc kubenswrapper[5112]: I1208 17:43:29.222172 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qqqtw"
Dec 08 17:43:29 crc kubenswrapper[5112]: I1208 17:43:29.222152 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qqqtw" event={"ID":"027b046b-01a1-48d8-a6b7-d03fd6509f1f","Type":"ContainerDied","Data":"3cb26a24f82051a00896f32287574c87cb6cf6d6b8dbacd12f531681e6d7e87a"}
Dec 08 17:43:29 crc kubenswrapper[5112]: I1208 17:43:29.222835 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qqqtw" event={"ID":"027b046b-01a1-48d8-a6b7-d03fd6509f1f","Type":"ContainerDied","Data":"bcb9a0d4c4d5583f28725e1ef0cf51fbdeb0097cc05eaff0f229b0c62615d120"}
Dec 08 17:43:29 crc kubenswrapper[5112]: I1208 17:43:29.222933 5112 scope.go:117] "RemoveContainer" containerID="3cb26a24f82051a00896f32287574c87cb6cf6d6b8dbacd12f531681e6d7e87a"
Dec 08 17:43:29 crc kubenswrapper[5112]: I1208 17:43:29.247799 5112 scope.go:117] "RemoveContainer" containerID="c5135dd242a8fdfa37be6f9408ab8de9691fc6e172272f13f9a2c91b1c554fce"
Dec 08 17:43:29 crc kubenswrapper[5112]: I1208 17:43:29.261739 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qqqtw"]
Dec 08 17:43:29 crc kubenswrapper[5112]: I1208 17:43:29.265961 5112 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qqqtw"]
Dec 08 17:43:29 crc kubenswrapper[5112]: I1208 17:43:29.275171 5112 scope.go:117] "RemoveContainer" containerID="a67ab10e6551881f65db43d5dabc235689344644f4ddfea17b4657e6adb854c2"
Dec 08 17:43:29 crc kubenswrapper[5112]: I1208 17:43:29.305318 5112 scope.go:117] "RemoveContainer" containerID="3cb26a24f82051a00896f32287574c87cb6cf6d6b8dbacd12f531681e6d7e87a"
Dec 08 17:43:29 crc kubenswrapper[5112]: E1208 17:43:29.305646 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cb26a24f82051a00896f32287574c87cb6cf6d6b8dbacd12f531681e6d7e87a\": container with ID starting with 3cb26a24f82051a00896f32287574c87cb6cf6d6b8dbacd12f531681e6d7e87a not found: ID does not exist" containerID="3cb26a24f82051a00896f32287574c87cb6cf6d6b8dbacd12f531681e6d7e87a"
Dec 08 17:43:29 crc kubenswrapper[5112]: I1208 17:43:29.305684 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cb26a24f82051a00896f32287574c87cb6cf6d6b8dbacd12f531681e6d7e87a"} err="failed to get container status \"3cb26a24f82051a00896f32287574c87cb6cf6d6b8dbacd12f531681e6d7e87a\": rpc error: code = NotFound desc = could not find container \"3cb26a24f82051a00896f32287574c87cb6cf6d6b8dbacd12f531681e6d7e87a\": container with ID starting with 3cb26a24f82051a00896f32287574c87cb6cf6d6b8dbacd12f531681e6d7e87a not found: ID does not exist"
Dec 08 17:43:29 crc kubenswrapper[5112]: I1208 17:43:29.305731 5112 scope.go:117] "RemoveContainer" containerID="c5135dd242a8fdfa37be6f9408ab8de9691fc6e172272f13f9a2c91b1c554fce"
Dec 08 17:43:29 crc kubenswrapper[5112]: E1208 17:43:29.306100 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5135dd242a8fdfa37be6f9408ab8de9691fc6e172272f13f9a2c91b1c554fce\": container with ID starting with c5135dd242a8fdfa37be6f9408ab8de9691fc6e172272f13f9a2c91b1c554fce not found: ID does not exist" containerID="c5135dd242a8fdfa37be6f9408ab8de9691fc6e172272f13f9a2c91b1c554fce"
Dec 08 17:43:29 crc kubenswrapper[5112]: I1208 17:43:29.306150 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5135dd242a8fdfa37be6f9408ab8de9691fc6e172272f13f9a2c91b1c554fce"} err="failed to get container status \"c5135dd242a8fdfa37be6f9408ab8de9691fc6e172272f13f9a2c91b1c554fce\": rpc error: code = NotFound desc = could not find container \"c5135dd242a8fdfa37be6f9408ab8de9691fc6e172272f13f9a2c91b1c554fce\": container with ID starting with c5135dd242a8fdfa37be6f9408ab8de9691fc6e172272f13f9a2c91b1c554fce not found: ID does not exist"
Dec 08 17:43:29 crc kubenswrapper[5112]: I1208 17:43:29.306193 5112 scope.go:117] "RemoveContainer" containerID="a67ab10e6551881f65db43d5dabc235689344644f4ddfea17b4657e6adb854c2"
Dec 08 17:43:29 crc kubenswrapper[5112]: E1208 17:43:29.306462 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a67ab10e6551881f65db43d5dabc235689344644f4ddfea17b4657e6adb854c2\": container with ID starting with a67ab10e6551881f65db43d5dabc235689344644f4ddfea17b4657e6adb854c2 not found: ID does not exist" containerID="a67ab10e6551881f65db43d5dabc235689344644f4ddfea17b4657e6adb854c2"
Dec 08 17:43:29 crc kubenswrapper[5112]: I1208 17:43:29.306497 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a67ab10e6551881f65db43d5dabc235689344644f4ddfea17b4657e6adb854c2"} err="failed to get container status \"a67ab10e6551881f65db43d5dabc235689344644f4ddfea17b4657e6adb854c2\": rpc error: code = NotFound desc = could not find container \"a67ab10e6551881f65db43d5dabc235689344644f4ddfea17b4657e6adb854c2\": container with ID starting with a67ab10e6551881f65db43d5dabc235689344644f4ddfea17b4657e6adb854c2 not found: ID does not exist"
Dec 08 17:43:29 crc kubenswrapper[5112]: I1208 17:43:29.323115 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="027b046b-01a1-48d8-a6b7-d03fd6509f1f" path="/var/lib/kubelet/pods/027b046b-01a1-48d8-a6b7-d03fd6509f1f/volumes"
Dec 08 17:43:29 crc kubenswrapper[5112]: I1208 17:43:29.433478 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4p756"
Dec 08 17:43:29 crc kubenswrapper[5112]: I1208 17:43:29.473658 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4p756"
Dec 08 17:43:30 crc kubenswrapper[5112]: I1208 17:43:30.289812 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gsp8f"]
Dec 08 17:43:30 crc kubenswrapper[5112]: I1208 17:43:30.290150 5112 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gsp8f" podUID="fb826094-3e88-481d-bf22-ad5c3eb0f280" containerName="registry-server" containerID="cri-o://64134e231bd4a800161dc7ecb8ebf2e18a64304267a5c2f5c3888c413178f91e" gracePeriod=2
Dec 08 17:43:30 crc kubenswrapper[5112]: I1208 17:43:30.681633 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gsp8f"
Dec 08 17:43:30 crc kubenswrapper[5112]: I1208 17:43:30.780704 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hd6p\" (UniqueName: \"kubernetes.io/projected/fb826094-3e88-481d-bf22-ad5c3eb0f280-kube-api-access-2hd6p\") pod \"fb826094-3e88-481d-bf22-ad5c3eb0f280\" (UID: \"fb826094-3e88-481d-bf22-ad5c3eb0f280\") "
Dec 08 17:43:30 crc kubenswrapper[5112]: I1208 17:43:30.781410 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb826094-3e88-481d-bf22-ad5c3eb0f280-utilities\") pod \"fb826094-3e88-481d-bf22-ad5c3eb0f280\" (UID: \"fb826094-3e88-481d-bf22-ad5c3eb0f280\") "
Dec 08 17:43:30 crc kubenswrapper[5112]: I1208 17:43:30.781473 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb826094-3e88-481d-bf22-ad5c3eb0f280-catalog-content\") pod \"fb826094-3e88-481d-bf22-ad5c3eb0f280\" (UID: \"fb826094-3e88-481d-bf22-ad5c3eb0f280\") "
Dec 08 17:43:30 crc kubenswrapper[5112]: I1208 17:43:30.782296 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb826094-3e88-481d-bf22-ad5c3eb0f280-utilities" (OuterVolumeSpecName: "utilities") pod "fb826094-3e88-481d-bf22-ad5c3eb0f280" (UID: "fb826094-3e88-481d-bf22-ad5c3eb0f280"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:43:30 crc kubenswrapper[5112]: I1208 17:43:30.786461 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb826094-3e88-481d-bf22-ad5c3eb0f280-kube-api-access-2hd6p" (OuterVolumeSpecName: "kube-api-access-2hd6p") pod "fb826094-3e88-481d-bf22-ad5c3eb0f280" (UID: "fb826094-3e88-481d-bf22-ad5c3eb0f280"). InnerVolumeSpecName "kube-api-access-2hd6p". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:43:30 crc kubenswrapper[5112]: I1208 17:43:30.790305 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb826094-3e88-481d-bf22-ad5c3eb0f280-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fb826094-3e88-481d-bf22-ad5c3eb0f280" (UID: "fb826094-3e88-481d-bf22-ad5c3eb0f280"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:43:30 crc kubenswrapper[5112]: I1208 17:43:30.882668 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2hd6p\" (UniqueName: \"kubernetes.io/projected/fb826094-3e88-481d-bf22-ad5c3eb0f280-kube-api-access-2hd6p\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:30 crc kubenswrapper[5112]: I1208 17:43:30.882720 5112 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb826094-3e88-481d-bf22-ad5c3eb0f280-utilities\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:30 crc kubenswrapper[5112]: I1208 17:43:30.882738 5112 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb826094-3e88-481d-bf22-ad5c3eb0f280-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:30 crc kubenswrapper[5112]: I1208 17:43:30.890315 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-5llh9"
Dec 08 17:43:30 crc kubenswrapper[5112]: I1208 17:43:30.890530 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5llh9"
Dec 08 17:43:30 crc kubenswrapper[5112]: I1208 17:43:30.945304 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5llh9"
Dec 08 17:43:31 crc kubenswrapper[5112]: I1208 17:43:31.241076 5112 generic.go:358] "Generic (PLEG): container finished" podID="fb826094-3e88-481d-bf22-ad5c3eb0f280" containerID="64134e231bd4a800161dc7ecb8ebf2e18a64304267a5c2f5c3888c413178f91e" exitCode=0
Dec 08 17:43:31 crc kubenswrapper[5112]: I1208 17:43:31.242632 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gsp8f"
Dec 08 17:43:31 crc kubenswrapper[5112]: I1208 17:43:31.248728 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gsp8f" event={"ID":"fb826094-3e88-481d-bf22-ad5c3eb0f280","Type":"ContainerDied","Data":"64134e231bd4a800161dc7ecb8ebf2e18a64304267a5c2f5c3888c413178f91e"}
Dec 08 17:43:31 crc kubenswrapper[5112]: I1208 17:43:31.248821 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gsp8f" event={"ID":"fb826094-3e88-481d-bf22-ad5c3eb0f280","Type":"ContainerDied","Data":"b1fa86a6340863b5d8576d5daf79440e06cad82dd9f4e779bee46e29f6030b27"}
Dec 08 17:43:31 crc kubenswrapper[5112]: I1208 17:43:31.248854 5112 scope.go:117] "RemoveContainer" containerID="64134e231bd4a800161dc7ecb8ebf2e18a64304267a5c2f5c3888c413178f91e"
Dec 08 17:43:31 crc kubenswrapper[5112]: I1208 17:43:31.268882 5112 scope.go:117] "RemoveContainer" containerID="563195da28fa066f9e7f3c39b2ada339260ed92161fee935978b5cf279776ff6"
Dec 08 17:43:31 crc kubenswrapper[5112]: I1208 17:43:31.287578 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gsp8f"]
Dec 08 17:43:31 crc kubenswrapper[5112]: I1208 17:43:31.292622 5112 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gsp8f"]
Dec 08 17:43:31 crc kubenswrapper[5112]: I1208 17:43:31.294468 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5llh9"
Dec 08 17:43:31 crc kubenswrapper[5112]: I1208 17:43:31.326343 5112 scope.go:117] "RemoveContainer" containerID="2b10d326d476f9bd22357ec4d5827249a676c88a3e41665499d4bfe746867c32"
Dec 08 17:43:31 crc kubenswrapper[5112]: I1208 17:43:31.330176 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb826094-3e88-481d-bf22-ad5c3eb0f280" path="/var/lib/kubelet/pods/fb826094-3e88-481d-bf22-ad5c3eb0f280/volumes"
Dec 08 17:43:31 crc kubenswrapper[5112]: I1208 17:43:31.344165 5112 scope.go:117] "RemoveContainer" containerID="64134e231bd4a800161dc7ecb8ebf2e18a64304267a5c2f5c3888c413178f91e"
Dec 08 17:43:31 crc kubenswrapper[5112]: E1208 17:43:31.344510 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64134e231bd4a800161dc7ecb8ebf2e18a64304267a5c2f5c3888c413178f91e\": container with ID starting with 64134e231bd4a800161dc7ecb8ebf2e18a64304267a5c2f5c3888c413178f91e not found: ID does not exist" containerID="64134e231bd4a800161dc7ecb8ebf2e18a64304267a5c2f5c3888c413178f91e"
Dec 08 17:43:31 crc kubenswrapper[5112]: I1208 17:43:31.344544 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64134e231bd4a800161dc7ecb8ebf2e18a64304267a5c2f5c3888c413178f91e"} err="failed to get container status \"64134e231bd4a800161dc7ecb8ebf2e18a64304267a5c2f5c3888c413178f91e\": rpc error: code = NotFound desc = could not find container \"64134e231bd4a800161dc7ecb8ebf2e18a64304267a5c2f5c3888c413178f91e\": container with ID starting with 64134e231bd4a800161dc7ecb8ebf2e18a64304267a5c2f5c3888c413178f91e not found: ID does not exist"
Dec 08 17:43:31 crc kubenswrapper[5112]: I1208 17:43:31.344563 5112 scope.go:117] "RemoveContainer" containerID="563195da28fa066f9e7f3c39b2ada339260ed92161fee935978b5cf279776ff6"
Dec 08 17:43:31 crc kubenswrapper[5112]: E1208 17:43:31.344739 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"563195da28fa066f9e7f3c39b2ada339260ed92161fee935978b5cf279776ff6\": container with ID starting with 563195da28fa066f9e7f3c39b2ada339260ed92161fee935978b5cf279776ff6 not found: ID does not exist" containerID="563195da28fa066f9e7f3c39b2ada339260ed92161fee935978b5cf279776ff6"
Dec 08 17:43:31 crc kubenswrapper[5112]: I1208 17:43:31.344755 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"563195da28fa066f9e7f3c39b2ada339260ed92161fee935978b5cf279776ff6"} err="failed to get container status \"563195da28fa066f9e7f3c39b2ada339260ed92161fee935978b5cf279776ff6\": rpc error: code = NotFound desc = could not find container \"563195da28fa066f9e7f3c39b2ada339260ed92161fee935978b5cf279776ff6\": container with ID starting with 563195da28fa066f9e7f3c39b2ada339260ed92161fee935978b5cf279776ff6 not found: ID does not exist"
Dec 08 17:43:31 crc kubenswrapper[5112]: I1208 17:43:31.344766 5112 scope.go:117] "RemoveContainer" containerID="2b10d326d476f9bd22357ec4d5827249a676c88a3e41665499d4bfe746867c32"
Dec 08 17:43:31 crc kubenswrapper[5112]: E1208 17:43:31.344973 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b10d326d476f9bd22357ec4d5827249a676c88a3e41665499d4bfe746867c32\": container with ID starting with 2b10d326d476f9bd22357ec4d5827249a676c88a3e41665499d4bfe746867c32 not found: ID does not exist" containerID="2b10d326d476f9bd22357ec4d5827249a676c88a3e41665499d4bfe746867c32"
Dec 08 17:43:31 crc kubenswrapper[5112]: I1208 17:43:31.344997 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b10d326d476f9bd22357ec4d5827249a676c88a3e41665499d4bfe746867c32"} err="failed to get container status \"2b10d326d476f9bd22357ec4d5827249a676c88a3e41665499d4bfe746867c32\": rpc error: code = NotFound desc = could not find container \"2b10d326d476f9bd22357ec4d5827249a676c88a3e41665499d4bfe746867c32\": container with ID starting with 2b10d326d476f9bd22357ec4d5827249a676c88a3e41665499d4bfe746867c32 not found: ID does not exist"
Dec 08 17:43:32 crc kubenswrapper[5112]: I1208 17:43:32.690022 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5llh9"]
Dec 08 17:43:34 crc kubenswrapper[5112]: I1208 17:43:34.258453 5112 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5llh9" podUID="0f6a3ac4-dcc2-4fbd-8699-d97127b35495" containerName="registry-server" containerID="cri-o://d361ee66d2f6d24d3a4f42cea5d791471273c0cfc1c5ac0d170db461f8e374e2" gracePeriod=2
Dec 08 17:43:35 crc kubenswrapper[5112]: I1208 17:43:35.103497 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5llh9"
Dec 08 17:43:35 crc kubenswrapper[5112]: I1208 17:43:35.234974 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f6a3ac4-dcc2-4fbd-8699-d97127b35495-utilities\") pod \"0f6a3ac4-dcc2-4fbd-8699-d97127b35495\" (UID: \"0f6a3ac4-dcc2-4fbd-8699-d97127b35495\") "
Dec 08 17:43:35 crc kubenswrapper[5112]: I1208 17:43:35.235008 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f6a3ac4-dcc2-4fbd-8699-d97127b35495-catalog-content\") pod \"0f6a3ac4-dcc2-4fbd-8699-d97127b35495\" (UID: \"0f6a3ac4-dcc2-4fbd-8699-d97127b35495\") "
Dec 08 17:43:35 crc kubenswrapper[5112]: I1208 17:43:35.235048 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpw5l\" (UniqueName: \"kubernetes.io/projected/0f6a3ac4-dcc2-4fbd-8699-d97127b35495-kube-api-access-xpw5l\") pod \"0f6a3ac4-dcc2-4fbd-8699-d97127b35495\" (UID: \"0f6a3ac4-dcc2-4fbd-8699-d97127b35495\") "
Dec 08 17:43:35 crc kubenswrapper[5112]: I1208 17:43:35.236066 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f6a3ac4-dcc2-4fbd-8699-d97127b35495-utilities" (OuterVolumeSpecName: "utilities") pod "0f6a3ac4-dcc2-4fbd-8699-d97127b35495" (UID: "0f6a3ac4-dcc2-4fbd-8699-d97127b35495"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:43:35 crc kubenswrapper[5112]: I1208 17:43:35.244608 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f6a3ac4-dcc2-4fbd-8699-d97127b35495-kube-api-access-xpw5l" (OuterVolumeSpecName: "kube-api-access-xpw5l") pod "0f6a3ac4-dcc2-4fbd-8699-d97127b35495" (UID: "0f6a3ac4-dcc2-4fbd-8699-d97127b35495"). InnerVolumeSpecName "kube-api-access-xpw5l". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:43:35 crc kubenswrapper[5112]: I1208 17:43:35.267283 5112 generic.go:358] "Generic (PLEG): container finished" podID="0f6a3ac4-dcc2-4fbd-8699-d97127b35495" containerID="d361ee66d2f6d24d3a4f42cea5d791471273c0cfc1c5ac0d170db461f8e374e2" exitCode=0
Dec 08 17:43:35 crc kubenswrapper[5112]: I1208 17:43:35.267422 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5llh9" event={"ID":"0f6a3ac4-dcc2-4fbd-8699-d97127b35495","Type":"ContainerDied","Data":"d361ee66d2f6d24d3a4f42cea5d791471273c0cfc1c5ac0d170db461f8e374e2"}
Dec 08 17:43:35 crc kubenswrapper[5112]: I1208 17:43:35.267452 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5llh9" event={"ID":"0f6a3ac4-dcc2-4fbd-8699-d97127b35495","Type":"ContainerDied","Data":"1562d65c93d960de7fe1da3242cd9bbe172cd9b1c25bd80fa8fd413d7f99b2ef"}
Dec 08 17:43:35 crc kubenswrapper[5112]: I1208 17:43:35.267473 5112 scope.go:117] "RemoveContainer" containerID="d361ee66d2f6d24d3a4f42cea5d791471273c0cfc1c5ac0d170db461f8e374e2"
Dec 08 17:43:35 crc kubenswrapper[5112]: I1208 17:43:35.267633 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5llh9"
Dec 08 17:43:35 crc kubenswrapper[5112]: I1208 17:43:35.285974 5112 scope.go:117] "RemoveContainer" containerID="df139d1fa3324ea4905697862db3a041d7a4dfffb695bc78532818bc83b7d109"
Dec 08 17:43:35 crc kubenswrapper[5112]: I1208 17:43:35.311216 5112 scope.go:117] "RemoveContainer" containerID="32028fc43196a83f96b2f1130e2802dee4481cc57b3177b9eec338667f9c0110"
Dec 08 17:43:35 crc kubenswrapper[5112]: I1208 17:43:35.335437 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f6a3ac4-dcc2-4fbd-8699-d97127b35495-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0f6a3ac4-dcc2-4fbd-8699-d97127b35495" (UID: "0f6a3ac4-dcc2-4fbd-8699-d97127b35495"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:43:35 crc kubenswrapper[5112]: I1208 17:43:35.336487 5112 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f6a3ac4-dcc2-4fbd-8699-d97127b35495-utilities\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:35 crc kubenswrapper[5112]: I1208 17:43:35.336531 5112 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f6a3ac4-dcc2-4fbd-8699-d97127b35495-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:35 crc kubenswrapper[5112]: I1208 17:43:35.336543 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xpw5l\" (UniqueName: \"kubernetes.io/projected/0f6a3ac4-dcc2-4fbd-8699-d97127b35495-kube-api-access-xpw5l\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:35 crc kubenswrapper[5112]: I1208 17:43:35.349984 5112 scope.go:117] "RemoveContainer" containerID="d361ee66d2f6d24d3a4f42cea5d791471273c0cfc1c5ac0d170db461f8e374e2"
Dec 08 17:43:35 crc kubenswrapper[5112]: E1208 17:43:35.350526 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d361ee66d2f6d24d3a4f42cea5d791471273c0cfc1c5ac0d170db461f8e374e2\": container with ID starting with d361ee66d2f6d24d3a4f42cea5d791471273c0cfc1c5ac0d170db461f8e374e2 not found: ID does not exist" containerID="d361ee66d2f6d24d3a4f42cea5d791471273c0cfc1c5ac0d170db461f8e374e2"
Dec 08 17:43:35 crc kubenswrapper[5112]: I1208 17:43:35.350580 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d361ee66d2f6d24d3a4f42cea5d791471273c0cfc1c5ac0d170db461f8e374e2"} err="failed to get container status \"d361ee66d2f6d24d3a4f42cea5d791471273c0cfc1c5ac0d170db461f8e374e2\": rpc error: code = NotFound desc = could not find container \"d361ee66d2f6d24d3a4f42cea5d791471273c0cfc1c5ac0d170db461f8e374e2\": container with ID starting with d361ee66d2f6d24d3a4f42cea5d791471273c0cfc1c5ac0d170db461f8e374e2 not found: ID does not exist"
Dec 08 17:43:35 crc kubenswrapper[5112]: I1208 17:43:35.350616 5112 scope.go:117] "RemoveContainer" containerID="df139d1fa3324ea4905697862db3a041d7a4dfffb695bc78532818bc83b7d109"
Dec 08 17:43:35 crc kubenswrapper[5112]: E1208 17:43:35.351120 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df139d1fa3324ea4905697862db3a041d7a4dfffb695bc78532818bc83b7d109\": container with ID starting with df139d1fa3324ea4905697862db3a041d7a4dfffb695bc78532818bc83b7d109 not found: ID does not exist" containerID="df139d1fa3324ea4905697862db3a041d7a4dfffb695bc78532818bc83b7d109"
Dec 08 17:43:35 crc kubenswrapper[5112]: I1208 17:43:35.351191 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df139d1fa3324ea4905697862db3a041d7a4dfffb695bc78532818bc83b7d109"} err="failed to get container status \"df139d1fa3324ea4905697862db3a041d7a4dfffb695bc78532818bc83b7d109\": rpc error: code = NotFound desc = could not find container \"df139d1fa3324ea4905697862db3a041d7a4dfffb695bc78532818bc83b7d109\": container with ID starting with df139d1fa3324ea4905697862db3a041d7a4dfffb695bc78532818bc83b7d109 not found: ID does not exist"
Dec 08 17:43:35 crc kubenswrapper[5112]: I1208 17:43:35.351212 5112 scope.go:117] "RemoveContainer" containerID="32028fc43196a83f96b2f1130e2802dee4481cc57b3177b9eec338667f9c0110"
Dec 08 17:43:35 crc kubenswrapper[5112]: E1208 17:43:35.351495 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32028fc43196a83f96b2f1130e2802dee4481cc57b3177b9eec338667f9c0110\": container with ID starting with 32028fc43196a83f96b2f1130e2802dee4481cc57b3177b9eec338667f9c0110 not found: ID does not exist" containerID="32028fc43196a83f96b2f1130e2802dee4481cc57b3177b9eec338667f9c0110"
Dec 08 17:43:35 crc kubenswrapper[5112]: I1208 17:43:35.351539 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32028fc43196a83f96b2f1130e2802dee4481cc57b3177b9eec338667f9c0110"} err="failed to get container status \"32028fc43196a83f96b2f1130e2802dee4481cc57b3177b9eec338667f9c0110\": rpc error: code = NotFound desc = could not find container \"32028fc43196a83f96b2f1130e2802dee4481cc57b3177b9eec338667f9c0110\": container with ID starting with 32028fc43196a83f96b2f1130e2802dee4481cc57b3177b9eec338667f9c0110 not found: ID does not exist"
Dec 08 17:43:35 crc kubenswrapper[5112]: I1208 17:43:35.593383 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5llh9"]
Dec 08 17:43:35 crc kubenswrapper[5112]: I1208 17:43:35.597683 5112 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5llh9"]
Dec 08 17:43:36 crc kubenswrapper[5112]: I1208 17:43:36.310152 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-74dth"]
Dec 08 17:43:37 crc kubenswrapper[5112]: I1208 17:43:37.322555 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f6a3ac4-dcc2-4fbd-8699-d97127b35495" path="/var/lib/kubelet/pods/0f6a3ac4-dcc2-4fbd-8699-d97127b35495/volumes"
Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.343425 5112 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-74dth" podUID="de7615f0-5173-4b64-8f4d-ba4da37884b6" containerName="oauth-openshift" containerID="cri-o://1d0efa609fef276c4506be8fe082e9dc4c3eff1473648ae8af120fa2561e02f8" gracePeriod=15
Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.768486 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-74dth"
Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.807464 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-774c6c58b6-2phlq"]
Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.808281 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0f6a3ac4-dcc2-4fbd-8699-d97127b35495" containerName="registry-server"
Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.808307 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f6a3ac4-dcc2-4fbd-8699-d97127b35495" containerName="registry-server"
Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.808325 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0f6a3ac4-dcc2-4fbd-8699-d97127b35495" containerName="extract-utilities"
Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.808334 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f6a3ac4-dcc2-4fbd-8699-d97127b35495" containerName="extract-utilities"
Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.808347 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0f6a3ac4-dcc2-4fbd-8699-d97127b35495" containerName="extract-content"
Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.808355 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f6a3ac4-dcc2-4fbd-8699-d97127b35495" containerName="extract-content"
Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.808377 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fb826094-3e88-481d-bf22-ad5c3eb0f280" containerName="extract-content"
Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.808385 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb826094-3e88-481d-bf22-ad5c3eb0f280" containerName="extract-content"
Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.808403 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fb826094-3e88-481d-bf22-ad5c3eb0f280" containerName="registry-server"
Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.808412 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb826094-3e88-481d-bf22-ad5c3eb0f280" containerName="registry-server"
Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.808424 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="027b046b-01a1-48d8-a6b7-d03fd6509f1f" containerName="extract-content"
Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.808431 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="027b046b-01a1-48d8-a6b7-d03fd6509f1f" containerName="extract-content"
Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.808444 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="de7615f0-5173-4b64-8f4d-ba4da37884b6" containerName="oauth-openshift"
Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.808451 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="de7615f0-5173-4b64-8f4d-ba4da37884b6" containerName="oauth-openshift"
Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.808463 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing
container" podUID="027b046b-01a1-48d8-a6b7-d03fd6509f1f" containerName="extract-utilities" Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.808470 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="027b046b-01a1-48d8-a6b7-d03fd6509f1f" containerName="extract-utilities" Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.808482 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fb826094-3e88-481d-bf22-ad5c3eb0f280" containerName="extract-utilities" Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.808489 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb826094-3e88-481d-bf22-ad5c3eb0f280" containerName="extract-utilities" Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.808506 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="027b046b-01a1-48d8-a6b7-d03fd6509f1f" containerName="registry-server" Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.808514 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="027b046b-01a1-48d8-a6b7-d03fd6509f1f" containerName="registry-server" Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.808644 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="027b046b-01a1-48d8-a6b7-d03fd6509f1f" containerName="registry-server" Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.808664 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="0f6a3ac4-dcc2-4fbd-8699-d97127b35495" containerName="registry-server" Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.808675 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="fb826094-3e88-481d-bf22-ad5c3eb0f280" containerName="registry-server" Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.808689 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="de7615f0-5173-4b64-8f4d-ba4da37884b6" containerName="oauth-openshift" Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.812722 5112 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.819370 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-774c6c58b6-2phlq"] Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.897129 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-user-template-provider-selection\") pod \"de7615f0-5173-4b64-8f4d-ba4da37884b6\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.897216 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-system-trusted-ca-bundle\") pod \"de7615f0-5173-4b64-8f4d-ba4da37884b6\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.897251 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-user-template-error\") pod \"de7615f0-5173-4b64-8f4d-ba4da37884b6\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.897274 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-system-session\") pod \"de7615f0-5173-4b64-8f4d-ba4da37884b6\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.897309 5112 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-user-idp-0-file-data\") pod \"de7615f0-5173-4b64-8f4d-ba4da37884b6\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.897349 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-system-router-certs\") pod \"de7615f0-5173-4b64-8f4d-ba4da37884b6\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.897372 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhplh\" (UniqueName: \"kubernetes.io/projected/de7615f0-5173-4b64-8f4d-ba4da37884b6-kube-api-access-nhplh\") pod \"de7615f0-5173-4b64-8f4d-ba4da37884b6\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.898245 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "de7615f0-5173-4b64-8f4d-ba4da37884b6" (UID: "de7615f0-5173-4b64-8f4d-ba4da37884b6"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.898320 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "de7615f0-5173-4b64-8f4d-ba4da37884b6" (UID: "de7615f0-5173-4b64-8f4d-ba4da37884b6"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.897417 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-system-service-ca\") pod \"de7615f0-5173-4b64-8f4d-ba4da37884b6\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.898502 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/de7615f0-5173-4b64-8f4d-ba4da37884b6-audit-policies\") pod \"de7615f0-5173-4b64-8f4d-ba4da37884b6\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.898549 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-system-ocp-branding-template\") pod \"de7615f0-5173-4b64-8f4d-ba4da37884b6\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.898572 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-system-serving-cert\") pod \"de7615f0-5173-4b64-8f4d-ba4da37884b6\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.898609 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-user-template-login\") pod \"de7615f0-5173-4b64-8f4d-ba4da37884b6\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " Dec 08 17:44:01 crc 
kubenswrapper[5112]: I1208 17:44:01.898633 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/de7615f0-5173-4b64-8f4d-ba4da37884b6-audit-dir\") pod \"de7615f0-5173-4b64-8f4d-ba4da37884b6\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.898670 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-system-cliconfig\") pod \"de7615f0-5173-4b64-8f4d-ba4da37884b6\" (UID: \"de7615f0-5173-4b64-8f4d-ba4da37884b6\") " Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.898747 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0fd9d9e8-bdb0-4e80-96c3-551286865222-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-774c6c58b6-2phlq\" (UID: \"0fd9d9e8-bdb0-4e80-96c3-551286865222\") " pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.898786 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0fd9d9e8-bdb0-4e80-96c3-551286865222-v4-0-config-user-template-login\") pod \"oauth-openshift-774c6c58b6-2phlq\" (UID: \"0fd9d9e8-bdb0-4e80-96c3-551286865222\") " pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.898803 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0fd9d9e8-bdb0-4e80-96c3-551286865222-v4-0-config-user-idp-0-file-data\") pod 
\"oauth-openshift-774c6c58b6-2phlq\" (UID: \"0fd9d9e8-bdb0-4e80-96c3-551286865222\") " pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.898834 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0fd9d9e8-bdb0-4e80-96c3-551286865222-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-774c6c58b6-2phlq\" (UID: \"0fd9d9e8-bdb0-4e80-96c3-551286865222\") " pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.898850 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0fd9d9e8-bdb0-4e80-96c3-551286865222-v4-0-config-system-serving-cert\") pod \"oauth-openshift-774c6c58b6-2phlq\" (UID: \"0fd9d9e8-bdb0-4e80-96c3-551286865222\") " pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.898874 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0fd9d9e8-bdb0-4e80-96c3-551286865222-audit-policies\") pod \"oauth-openshift-774c6c58b6-2phlq\" (UID: \"0fd9d9e8-bdb0-4e80-96c3-551286865222\") " pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.898892 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0fd9d9e8-bdb0-4e80-96c3-551286865222-v4-0-config-system-router-certs\") pod \"oauth-openshift-774c6c58b6-2phlq\" (UID: \"0fd9d9e8-bdb0-4e80-96c3-551286865222\") " pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:01 
crc kubenswrapper[5112]: I1208 17:44:01.898923 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0fd9d9e8-bdb0-4e80-96c3-551286865222-v4-0-config-system-cliconfig\") pod \"oauth-openshift-774c6c58b6-2phlq\" (UID: \"0fd9d9e8-bdb0-4e80-96c3-551286865222\") " pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.898939 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0fd9d9e8-bdb0-4e80-96c3-551286865222-v4-0-config-system-session\") pod \"oauth-openshift-774c6c58b6-2phlq\" (UID: \"0fd9d9e8-bdb0-4e80-96c3-551286865222\") " pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.898961 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0fd9d9e8-bdb0-4e80-96c3-551286865222-v4-0-config-user-template-error\") pod \"oauth-openshift-774c6c58b6-2phlq\" (UID: \"0fd9d9e8-bdb0-4e80-96c3-551286865222\") " pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.898992 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0fd9d9e8-bdb0-4e80-96c3-551286865222-v4-0-config-system-service-ca\") pod \"oauth-openshift-774c6c58b6-2phlq\" (UID: \"0fd9d9e8-bdb0-4e80-96c3-551286865222\") " pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.899016 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0fd9d9e8-bdb0-4e80-96c3-551286865222-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-774c6c58b6-2phlq\" (UID: \"0fd9d9e8-bdb0-4e80-96c3-551286865222\") " pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.899046 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0fd9d9e8-bdb0-4e80-96c3-551286865222-audit-dir\") pod \"oauth-openshift-774c6c58b6-2phlq\" (UID: \"0fd9d9e8-bdb0-4e80-96c3-551286865222\") " pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.899062 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzlwb\" (UniqueName: \"kubernetes.io/projected/0fd9d9e8-bdb0-4e80-96c3-551286865222-kube-api-access-bzlwb\") pod \"oauth-openshift-774c6c58b6-2phlq\" (UID: \"0fd9d9e8-bdb0-4e80-96c3-551286865222\") " pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.899138 5112 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.899150 5112 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.899169 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/de7615f0-5173-4b64-8f4d-ba4da37884b6-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "de7615f0-5173-4b64-8f4d-ba4da37884b6" (UID: "de7615f0-5173-4b64-8f4d-ba4da37884b6"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.900161 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de7615f0-5173-4b64-8f4d-ba4da37884b6-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "de7615f0-5173-4b64-8f4d-ba4da37884b6" (UID: "de7615f0-5173-4b64-8f4d-ba4da37884b6"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.902415 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "de7615f0-5173-4b64-8f4d-ba4da37884b6" (UID: "de7615f0-5173-4b64-8f4d-ba4da37884b6"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.904772 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "de7615f0-5173-4b64-8f4d-ba4da37884b6" (UID: "de7615f0-5173-4b64-8f4d-ba4da37884b6"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.907519 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "de7615f0-5173-4b64-8f4d-ba4da37884b6" (UID: "de7615f0-5173-4b64-8f4d-ba4da37884b6"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.907623 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de7615f0-5173-4b64-8f4d-ba4da37884b6-kube-api-access-nhplh" (OuterVolumeSpecName: "kube-api-access-nhplh") pod "de7615f0-5173-4b64-8f4d-ba4da37884b6" (UID: "de7615f0-5173-4b64-8f4d-ba4da37884b6"). InnerVolumeSpecName "kube-api-access-nhplh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.907689 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "de7615f0-5173-4b64-8f4d-ba4da37884b6" (UID: "de7615f0-5173-4b64-8f4d-ba4da37884b6"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.908201 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "de7615f0-5173-4b64-8f4d-ba4da37884b6" (UID: "de7615f0-5173-4b64-8f4d-ba4da37884b6"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.908267 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "de7615f0-5173-4b64-8f4d-ba4da37884b6" (UID: "de7615f0-5173-4b64-8f4d-ba4da37884b6"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.908492 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "de7615f0-5173-4b64-8f4d-ba4da37884b6" (UID: "de7615f0-5173-4b64-8f4d-ba4da37884b6"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.908646 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "de7615f0-5173-4b64-8f4d-ba4da37884b6" (UID: "de7615f0-5173-4b64-8f4d-ba4da37884b6"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:01 crc kubenswrapper[5112]: I1208 17:44:01.909278 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "de7615f0-5173-4b64-8f4d-ba4da37884b6" (UID: "de7615f0-5173-4b64-8f4d-ba4da37884b6"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.000784 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0fd9d9e8-bdb0-4e80-96c3-551286865222-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-774c6c58b6-2phlq\" (UID: \"0fd9d9e8-bdb0-4e80-96c3-551286865222\") " pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.000843 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0fd9d9e8-bdb0-4e80-96c3-551286865222-audit-dir\") pod \"oauth-openshift-774c6c58b6-2phlq\" (UID: \"0fd9d9e8-bdb0-4e80-96c3-551286865222\") " pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.000863 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bzlwb\" (UniqueName: \"kubernetes.io/projected/0fd9d9e8-bdb0-4e80-96c3-551286865222-kube-api-access-bzlwb\") pod \"oauth-openshift-774c6c58b6-2phlq\" (UID: \"0fd9d9e8-bdb0-4e80-96c3-551286865222\") " pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.000895 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0fd9d9e8-bdb0-4e80-96c3-551286865222-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-774c6c58b6-2phlq\" (UID: \"0fd9d9e8-bdb0-4e80-96c3-551286865222\") " pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.000921 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" 
(UniqueName: \"kubernetes.io/secret/0fd9d9e8-bdb0-4e80-96c3-551286865222-v4-0-config-user-template-login\") pod \"oauth-openshift-774c6c58b6-2phlq\" (UID: \"0fd9d9e8-bdb0-4e80-96c3-551286865222\") " pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.000938 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0fd9d9e8-bdb0-4e80-96c3-551286865222-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-774c6c58b6-2phlq\" (UID: \"0fd9d9e8-bdb0-4e80-96c3-551286865222\") " pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.000978 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0fd9d9e8-bdb0-4e80-96c3-551286865222-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-774c6c58b6-2phlq\" (UID: \"0fd9d9e8-bdb0-4e80-96c3-551286865222\") " pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.001005 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0fd9d9e8-bdb0-4e80-96c3-551286865222-v4-0-config-system-serving-cert\") pod \"oauth-openshift-774c6c58b6-2phlq\" (UID: \"0fd9d9e8-bdb0-4e80-96c3-551286865222\") " pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.001034 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0fd9d9e8-bdb0-4e80-96c3-551286865222-audit-policies\") pod \"oauth-openshift-774c6c58b6-2phlq\" (UID: \"0fd9d9e8-bdb0-4e80-96c3-551286865222\") " 
pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.001051 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0fd9d9e8-bdb0-4e80-96c3-551286865222-v4-0-config-system-router-certs\") pod \"oauth-openshift-774c6c58b6-2phlq\" (UID: \"0fd9d9e8-bdb0-4e80-96c3-551286865222\") " pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.001096 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0fd9d9e8-bdb0-4e80-96c3-551286865222-v4-0-config-system-cliconfig\") pod \"oauth-openshift-774c6c58b6-2phlq\" (UID: \"0fd9d9e8-bdb0-4e80-96c3-551286865222\") " pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.001114 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0fd9d9e8-bdb0-4e80-96c3-551286865222-v4-0-config-system-session\") pod \"oauth-openshift-774c6c58b6-2phlq\" (UID: \"0fd9d9e8-bdb0-4e80-96c3-551286865222\") " pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.001132 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0fd9d9e8-bdb0-4e80-96c3-551286865222-v4-0-config-user-template-error\") pod \"oauth-openshift-774c6c58b6-2phlq\" (UID: \"0fd9d9e8-bdb0-4e80-96c3-551286865222\") " pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.001161 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0fd9d9e8-bdb0-4e80-96c3-551286865222-v4-0-config-system-service-ca\") pod \"oauth-openshift-774c6c58b6-2phlq\" (UID: \"0fd9d9e8-bdb0-4e80-96c3-551286865222\") " pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.001203 5112 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.001213 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nhplh\" (UniqueName: \"kubernetes.io/projected/de7615f0-5173-4b64-8f4d-ba4da37884b6-kube-api-access-nhplh\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.001221 5112 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/de7615f0-5173-4b64-8f4d-ba4da37884b6-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.001232 5112 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.001241 5112 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.001250 5112 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.001260 5112 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/de7615f0-5173-4b64-8f4d-ba4da37884b6-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.001269 5112 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.001278 5112 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.001288 5112 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.001305 5112 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.001324 5112 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/de7615f0-5173-4b64-8f4d-ba4da37884b6-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.000978 5112 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0fd9d9e8-bdb0-4e80-96c3-551286865222-audit-dir\") pod \"oauth-openshift-774c6c58b6-2phlq\" (UID: \"0fd9d9e8-bdb0-4e80-96c3-551286865222\") " pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.002064 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0fd9d9e8-bdb0-4e80-96c3-551286865222-v4-0-config-system-cliconfig\") pod \"oauth-openshift-774c6c58b6-2phlq\" (UID: \"0fd9d9e8-bdb0-4e80-96c3-551286865222\") " pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.002336 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0fd9d9e8-bdb0-4e80-96c3-551286865222-audit-policies\") pod \"oauth-openshift-774c6c58b6-2phlq\" (UID: \"0fd9d9e8-bdb0-4e80-96c3-551286865222\") " pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.002685 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0fd9d9e8-bdb0-4e80-96c3-551286865222-v4-0-config-system-service-ca\") pod \"oauth-openshift-774c6c58b6-2phlq\" (UID: \"0fd9d9e8-bdb0-4e80-96c3-551286865222\") " pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.002750 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0fd9d9e8-bdb0-4e80-96c3-551286865222-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-774c6c58b6-2phlq\" (UID: \"0fd9d9e8-bdb0-4e80-96c3-551286865222\") " 
pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.005073 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0fd9d9e8-bdb0-4e80-96c3-551286865222-v4-0-config-system-session\") pod \"oauth-openshift-774c6c58b6-2phlq\" (UID: \"0fd9d9e8-bdb0-4e80-96c3-551286865222\") " pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.005117 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0fd9d9e8-bdb0-4e80-96c3-551286865222-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-774c6c58b6-2phlq\" (UID: \"0fd9d9e8-bdb0-4e80-96c3-551286865222\") " pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.005605 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0fd9d9e8-bdb0-4e80-96c3-551286865222-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-774c6c58b6-2phlq\" (UID: \"0fd9d9e8-bdb0-4e80-96c3-551286865222\") " pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.006175 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0fd9d9e8-bdb0-4e80-96c3-551286865222-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-774c6c58b6-2phlq\" (UID: \"0fd9d9e8-bdb0-4e80-96c3-551286865222\") " pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.006224 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0fd9d9e8-bdb0-4e80-96c3-551286865222-v4-0-config-system-router-certs\") pod \"oauth-openshift-774c6c58b6-2phlq\" (UID: \"0fd9d9e8-bdb0-4e80-96c3-551286865222\") " pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.006274 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0fd9d9e8-bdb0-4e80-96c3-551286865222-v4-0-config-system-serving-cert\") pod \"oauth-openshift-774c6c58b6-2phlq\" (UID: \"0fd9d9e8-bdb0-4e80-96c3-551286865222\") " pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.006852 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0fd9d9e8-bdb0-4e80-96c3-551286865222-v4-0-config-user-template-error\") pod \"oauth-openshift-774c6c58b6-2phlq\" (UID: \"0fd9d9e8-bdb0-4e80-96c3-551286865222\") " pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.006919 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0fd9d9e8-bdb0-4e80-96c3-551286865222-v4-0-config-user-template-login\") pod \"oauth-openshift-774c6c58b6-2phlq\" (UID: \"0fd9d9e8-bdb0-4e80-96c3-551286865222\") " pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.018820 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzlwb\" (UniqueName: \"kubernetes.io/projected/0fd9d9e8-bdb0-4e80-96c3-551286865222-kube-api-access-bzlwb\") pod \"oauth-openshift-774c6c58b6-2phlq\" (UID: \"0fd9d9e8-bdb0-4e80-96c3-551286865222\") " 
pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.135986 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.354616 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-774c6c58b6-2phlq"] Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.419203 5112 generic.go:358] "Generic (PLEG): container finished" podID="de7615f0-5173-4b64-8f4d-ba4da37884b6" containerID="1d0efa609fef276c4506be8fe082e9dc4c3eff1473648ae8af120fa2561e02f8" exitCode=0 Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.419280 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-74dth" event={"ID":"de7615f0-5173-4b64-8f4d-ba4da37884b6","Type":"ContainerDied","Data":"1d0efa609fef276c4506be8fe082e9dc4c3eff1473648ae8af120fa2561e02f8"} Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.419353 5112 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-74dth" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.419376 5112 scope.go:117] "RemoveContainer" containerID="1d0efa609fef276c4506be8fe082e9dc4c3eff1473648ae8af120fa2561e02f8" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.419359 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-74dth" event={"ID":"de7615f0-5173-4b64-8f4d-ba4da37884b6","Type":"ContainerDied","Data":"44bcde4355845cae0a794e490fbed911fe5c3f32e16149ef3aa2a1a60f583ced"} Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.422771 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" event={"ID":"0fd9d9e8-bdb0-4e80-96c3-551286865222","Type":"ContainerStarted","Data":"cadf832962736a16e38345dda5e93f1786f37584c7338f2624450ac633645d23"} Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.453033 5112 scope.go:117] "RemoveContainer" containerID="1d0efa609fef276c4506be8fe082e9dc4c3eff1473648ae8af120fa2561e02f8" Dec 08 17:44:02 crc kubenswrapper[5112]: E1208 17:44:02.453442 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d0efa609fef276c4506be8fe082e9dc4c3eff1473648ae8af120fa2561e02f8\": container with ID starting with 1d0efa609fef276c4506be8fe082e9dc4c3eff1473648ae8af120fa2561e02f8 not found: ID does not exist" containerID="1d0efa609fef276c4506be8fe082e9dc4c3eff1473648ae8af120fa2561e02f8" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.453486 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d0efa609fef276c4506be8fe082e9dc4c3eff1473648ae8af120fa2561e02f8"} err="failed to get container status \"1d0efa609fef276c4506be8fe082e9dc4c3eff1473648ae8af120fa2561e02f8\": rpc error: code = NotFound desc = could not find container 
\"1d0efa609fef276c4506be8fe082e9dc4c3eff1473648ae8af120fa2561e02f8\": container with ID starting with 1d0efa609fef276c4506be8fe082e9dc4c3eff1473648ae8af120fa2561e02f8 not found: ID does not exist" Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.456551 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-74dth"] Dec 08 17:44:02 crc kubenswrapper[5112]: I1208 17:44:02.459794 5112 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-74dth"] Dec 08 17:44:03 crc kubenswrapper[5112]: I1208 17:44:03.326853 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de7615f0-5173-4b64-8f4d-ba4da37884b6" path="/var/lib/kubelet/pods/de7615f0-5173-4b64-8f4d-ba4da37884b6/volumes" Dec 08 17:44:03 crc kubenswrapper[5112]: I1208 17:44:03.431463 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" event={"ID":"0fd9d9e8-bdb0-4e80-96c3-551286865222","Type":"ContainerStarted","Data":"3244ae38828cafefc7f24406a706b440d2886ece24cdee1d4ae4750fddd3613f"} Dec 08 17:44:03 crc kubenswrapper[5112]: I1208 17:44:03.431675 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:03 crc kubenswrapper[5112]: I1208 17:44:03.464901 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" podStartSLOduration=27.464868972 podStartE2EDuration="27.464868972s" podCreationTimestamp="2025-12-08 17:43:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:03.457627457 +0000 UTC m=+220.467176158" watchObservedRunningTime="2025-12-08 17:44:03.464868972 +0000 UTC m=+220.474417693" Dec 08 17:44:03 crc kubenswrapper[5112]: I1208 
17:44:03.696229 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-774c6c58b6-2phlq" Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.797719 5112 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.812423 5112 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.812512 5112 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.812836 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.813243 5112 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://635a0eb4e03cd1986945a33c0535e16c4f6d5cd1fbcbdb9e1ecf08f5ef7f71a1" gracePeriod=15 Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.813279 5112 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://7ef6cbbdf721c409b69927a89643bb23313daa2a55d76df271c59ced0881af9d" gracePeriod=15 Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.813361 5112 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://43d313abdeed2aafe14cb80ae38e97b5bbdc62197de02664fbc6c17597822faa" gracePeriod=15 
Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.813347 5112 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://62545f7eda9644c677205810fa88d197c6edaf171a4f7972db714c29d0e0c0f0" gracePeriod=15 Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.813291 5112 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://26119bf950efdaf6bdf29fdd288dcde55f46e233074dadd05d09ae9662a98ac4" gracePeriod=15 Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.814385 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.814406 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.814418 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.814424 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.814432 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.814439 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" 
containerName="kube-apiserver-cert-regeneration-controller" Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.814449 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.814455 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.814462 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.814467 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.814476 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.814481 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.814497 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.814502 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.814511 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 17:44:05 crc 
kubenswrapper[5112]: I1208 17:44:05.814516 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.814524 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.814529 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.817315 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.817374 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.817389 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.817467 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.817486 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.817544 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.817566 5112 memory_manager.go:356] "RemoveStaleState removing 
state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.817878 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.817902 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.818106 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.818124 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.821716 5112 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="3a14caf222afb62aaabdc47808b6f944" podUID="57755cc5f99000cc11e193051474d4e2" Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.855019 5112 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:44:05 crc kubenswrapper[5112]: E1208 17:44:05.856403 5112 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.246:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.959554 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.959942 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.959969 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.959985 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.960006 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.960066 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.960144 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.960174 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.960475 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:05 crc kubenswrapper[5112]: I1208 17:44:05.960503 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:44:06 crc kubenswrapper[5112]: I1208 17:44:06.061203 5112 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:44:06 crc kubenswrapper[5112]: I1208 17:44:06.061260 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:06 crc kubenswrapper[5112]: I1208 17:44:06.061276 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:06 crc kubenswrapper[5112]: I1208 17:44:06.061300 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:06 crc kubenswrapper[5112]: I1208 17:44:06.061317 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:06 crc kubenswrapper[5112]: I1208 17:44:06.061381 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") 
pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:06 crc kubenswrapper[5112]: I1208 17:44:06.061410 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:44:06 crc kubenswrapper[5112]: I1208 17:44:06.061487 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:06 crc kubenswrapper[5112]: I1208 17:44:06.061583 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:44:06 crc kubenswrapper[5112]: I1208 17:44:06.061692 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:44:06 crc kubenswrapper[5112]: I1208 17:44:06.061747 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:44:06 crc kubenswrapper[5112]: I1208 17:44:06.061771 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:44:06 crc kubenswrapper[5112]: I1208 17:44:06.061887 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:06 crc kubenswrapper[5112]: I1208 17:44:06.061899 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:06 crc kubenswrapper[5112]: I1208 17:44:06.061935 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:44:06 crc kubenswrapper[5112]: I1208 17:44:06.061906 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:06 crc 
kubenswrapper[5112]: I1208 17:44:06.061999 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:44:06 crc kubenswrapper[5112]: I1208 17:44:06.062010 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:44:06 crc kubenswrapper[5112]: I1208 17:44:06.062040 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:44:06 crc kubenswrapper[5112]: I1208 17:44:06.062041 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:06 crc kubenswrapper[5112]: I1208 17:44:06.158463 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:44:06 crc kubenswrapper[5112]: W1208 17:44:06.191371 5112 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7dbc7e1ee9c187a863ef9b473fad27b.slice/crio-1d615d0993839270778d23268e794f2cf28d73c6672ded9e5901082dbd4b6ee4 WatchSource:0}: Error finding container 1d615d0993839270778d23268e794f2cf28d73c6672ded9e5901082dbd4b6ee4: Status 404 returned error can't find the container with id 1d615d0993839270778d23268e794f2cf28d73c6672ded9e5901082dbd4b6ee4 Dec 08 17:44:06 crc kubenswrapper[5112]: E1208 17:44:06.195143 5112 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.246:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187f4e75a84e95bf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:44:06.194386367 +0000 UTC m=+223.203935078,LastTimestamp:2025-12-08 17:44:06.194386367 +0000 UTC m=+223.203935078,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:44:06 crc kubenswrapper[5112]: I1208 17:44:06.452312 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"1d615d0993839270778d23268e794f2cf28d73c6672ded9e5901082dbd4b6ee4"} Dec 08 17:44:06 crc kubenswrapper[5112]: I1208 17:44:06.454057 5112 generic.go:358] "Generic (PLEG): container finished" podID="ec521621-eb82-4b99-bd04-c1256bd46f3d" containerID="801af624bfb20fb357bc073ac09b2ad9e88d500474d82e7f3af4d494208670f0" exitCode=0 Dec 08 17:44:06 crc kubenswrapper[5112]: I1208 17:44:06.454146 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"ec521621-eb82-4b99-bd04-c1256bd46f3d","Type":"ContainerDied","Data":"801af624bfb20fb357bc073ac09b2ad9e88d500474d82e7f3af4d494208670f0"} Dec 08 17:44:06 crc kubenswrapper[5112]: I1208 17:44:06.454850 5112 status_manager.go:895] "Failed to get status for pod" podUID="ec521621-eb82-4b99-bd04-c1256bd46f3d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Dec 08 17:44:06 crc kubenswrapper[5112]: I1208 17:44:06.456479 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 08 17:44:06 crc kubenswrapper[5112]: I1208 17:44:06.457946 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 08 17:44:06 crc kubenswrapper[5112]: I1208 17:44:06.458631 5112 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="7ef6cbbdf721c409b69927a89643bb23313daa2a55d76df271c59ced0881af9d" exitCode=0 Dec 08 17:44:06 crc kubenswrapper[5112]: I1208 17:44:06.458647 5112 generic.go:358] "Generic 
(PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="43d313abdeed2aafe14cb80ae38e97b5bbdc62197de02664fbc6c17597822faa" exitCode=0 Dec 08 17:44:06 crc kubenswrapper[5112]: I1208 17:44:06.458654 5112 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="26119bf950efdaf6bdf29fdd288dcde55f46e233074dadd05d09ae9662a98ac4" exitCode=0 Dec 08 17:44:06 crc kubenswrapper[5112]: I1208 17:44:06.458661 5112 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="62545f7eda9644c677205810fa88d197c6edaf171a4f7972db714c29d0e0c0f0" exitCode=2 Dec 08 17:44:06 crc kubenswrapper[5112]: I1208 17:44:06.458722 5112 scope.go:117] "RemoveContainer" containerID="7a8bdf5c7af29378b589fe72e7884a1b14b3ee02680ca8842031e246f786fa59" Dec 08 17:44:07 crc kubenswrapper[5112]: E1208 17:44:07.091657 5112 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.246:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187f4e75a84e95bf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:44:06.194386367 +0000 UTC m=+223.203935078,LastTimestamp:2025-12-08 17:44:06.194386367 +0000 UTC m=+223.203935078,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:44:07 crc kubenswrapper[5112]: I1208 17:44:07.465726 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"297a21012e286735157b1fa600a73370e07196b403fee7aa06754cb222d72dcd"} Dec 08 17:44:07 crc kubenswrapper[5112]: I1208 17:44:07.465922 5112 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:44:07 crc kubenswrapper[5112]: I1208 17:44:07.466250 5112 status_manager.go:895] "Failed to get status for pod" podUID="ec521621-eb82-4b99-bd04-c1256bd46f3d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Dec 08 17:44:07 crc kubenswrapper[5112]: E1208 17:44:07.466441 5112 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.246:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:44:07 crc kubenswrapper[5112]: I1208 17:44:07.468538 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 08 17:44:07 crc kubenswrapper[5112]: I1208 17:44:07.730777 5112 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 08 17:44:07 crc kubenswrapper[5112]: I1208 17:44:07.731396 5112 status_manager.go:895] "Failed to get status for pod" podUID="ec521621-eb82-4b99-bd04-c1256bd46f3d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Dec 08 17:44:07 crc kubenswrapper[5112]: I1208 17:44:07.898949 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ec521621-eb82-4b99-bd04-c1256bd46f3d-var-lock\") pod \"ec521621-eb82-4b99-bd04-c1256bd46f3d\" (UID: \"ec521621-eb82-4b99-bd04-c1256bd46f3d\") " Dec 08 17:44:07 crc kubenswrapper[5112]: I1208 17:44:07.899822 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec521621-eb82-4b99-bd04-c1256bd46f3d-var-lock" (OuterVolumeSpecName: "var-lock") pod "ec521621-eb82-4b99-bd04-c1256bd46f3d" (UID: "ec521621-eb82-4b99-bd04-c1256bd46f3d"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:44:07 crc kubenswrapper[5112]: I1208 17:44:07.899955 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec521621-eb82-4b99-bd04-c1256bd46f3d-kube-api-access\") pod \"ec521621-eb82-4b99-bd04-c1256bd46f3d\" (UID: \"ec521621-eb82-4b99-bd04-c1256bd46f3d\") " Dec 08 17:44:07 crc kubenswrapper[5112]: I1208 17:44:07.900024 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ec521621-eb82-4b99-bd04-c1256bd46f3d-kubelet-dir\") pod \"ec521621-eb82-4b99-bd04-c1256bd46f3d\" (UID: \"ec521621-eb82-4b99-bd04-c1256bd46f3d\") " Dec 08 17:44:07 crc kubenswrapper[5112]: I1208 17:44:07.900408 5112 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ec521621-eb82-4b99-bd04-c1256bd46f3d-var-lock\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:07 crc kubenswrapper[5112]: I1208 17:44:07.900439 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec521621-eb82-4b99-bd04-c1256bd46f3d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ec521621-eb82-4b99-bd04-c1256bd46f3d" (UID: "ec521621-eb82-4b99-bd04-c1256bd46f3d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:44:07 crc kubenswrapper[5112]: I1208 17:44:07.933424 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec521621-eb82-4b99-bd04-c1256bd46f3d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ec521621-eb82-4b99-bd04-c1256bd46f3d" (UID: "ec521621-eb82-4b99-bd04-c1256bd46f3d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.001521 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec521621-eb82-4b99-bd04-c1256bd46f3d-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.001559 5112 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ec521621-eb82-4b99-bd04-c1256bd46f3d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.212176 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.213552 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.214258 5112 status_manager.go:895] "Failed to get status for pod" podUID="ec521621-eb82-4b99-bd04-c1256bd46f3d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.214730 5112 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.408756 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.408838 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.408906 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.408986 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.409036 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.409114 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.409487 5112 
reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.409498 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.409509 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.409656 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.412667 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.480470 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"ec521621-eb82-4b99-bd04-c1256bd46f3d","Type":"ContainerDied","Data":"5aee8da2acbae57a70ae21b4f3110fe8c2d77d9a818048f9e609b77f57fa4371"} Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.480511 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.480530 5112 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5aee8da2acbae57a70ae21b4f3110fe8c2d77d9a818048f9e609b77f57fa4371" Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.483971 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.485128 5112 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="635a0eb4e03cd1986945a33c0535e16c4f6d5cd1fbcbdb9e1ecf08f5ef7f71a1" exitCode=0 Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.485300 5112 scope.go:117] "RemoveContainer" containerID="7ef6cbbdf721c409b69927a89643bb23313daa2a55d76df271c59ced0881af9d" Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.485316 5112 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.486230 5112 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:44:08 crc kubenswrapper[5112]: E1208 17:44:08.487068 5112 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.246:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.504324 5112 status_manager.go:895] "Failed to get status for pod" podUID="ec521621-eb82-4b99-bd04-c1256bd46f3d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.504879 5112 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.505487 5112 scope.go:117] "RemoveContainer" containerID="43d313abdeed2aafe14cb80ae38e97b5bbdc62197de02664fbc6c17597822faa" Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.508229 5112 status_manager.go:895] "Failed to get status for pod" podUID="ec521621-eb82-4b99-bd04-c1256bd46f3d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 
17:44:08.508507 5112 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.511230 5112 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.511269 5112 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.511286 5112 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.511303 5112 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.520016 5112 scope.go:117] "RemoveContainer" containerID="26119bf950efdaf6bdf29fdd288dcde55f46e233074dadd05d09ae9662a98ac4" Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.535836 5112 scope.go:117] "RemoveContainer" containerID="62545f7eda9644c677205810fa88d197c6edaf171a4f7972db714c29d0e0c0f0" Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.553032 5112 scope.go:117] "RemoveContainer" containerID="635a0eb4e03cd1986945a33c0535e16c4f6d5cd1fbcbdb9e1ecf08f5ef7f71a1" Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.572431 5112 scope.go:117] "RemoveContainer" 
containerID="3fbd9e0043ef37c10fbc9823215d21c9b1eee4025f73932cd6cbf3055d5f2397" Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.626962 5112 scope.go:117] "RemoveContainer" containerID="7ef6cbbdf721c409b69927a89643bb23313daa2a55d76df271c59ced0881af9d" Dec 08 17:44:08 crc kubenswrapper[5112]: E1208 17:44:08.627615 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ef6cbbdf721c409b69927a89643bb23313daa2a55d76df271c59ced0881af9d\": container with ID starting with 7ef6cbbdf721c409b69927a89643bb23313daa2a55d76df271c59ced0881af9d not found: ID does not exist" containerID="7ef6cbbdf721c409b69927a89643bb23313daa2a55d76df271c59ced0881af9d" Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.627678 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ef6cbbdf721c409b69927a89643bb23313daa2a55d76df271c59ced0881af9d"} err="failed to get container status \"7ef6cbbdf721c409b69927a89643bb23313daa2a55d76df271c59ced0881af9d\": rpc error: code = NotFound desc = could not find container \"7ef6cbbdf721c409b69927a89643bb23313daa2a55d76df271c59ced0881af9d\": container with ID starting with 7ef6cbbdf721c409b69927a89643bb23313daa2a55d76df271c59ced0881af9d not found: ID does not exist" Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.627713 5112 scope.go:117] "RemoveContainer" containerID="43d313abdeed2aafe14cb80ae38e97b5bbdc62197de02664fbc6c17597822faa" Dec 08 17:44:08 crc kubenswrapper[5112]: E1208 17:44:08.628131 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43d313abdeed2aafe14cb80ae38e97b5bbdc62197de02664fbc6c17597822faa\": container with ID starting with 43d313abdeed2aafe14cb80ae38e97b5bbdc62197de02664fbc6c17597822faa not found: ID does not exist" containerID="43d313abdeed2aafe14cb80ae38e97b5bbdc62197de02664fbc6c17597822faa" Dec 08 17:44:08 crc 
kubenswrapper[5112]: I1208 17:44:08.628167 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43d313abdeed2aafe14cb80ae38e97b5bbdc62197de02664fbc6c17597822faa"} err="failed to get container status \"43d313abdeed2aafe14cb80ae38e97b5bbdc62197de02664fbc6c17597822faa\": rpc error: code = NotFound desc = could not find container \"43d313abdeed2aafe14cb80ae38e97b5bbdc62197de02664fbc6c17597822faa\": container with ID starting with 43d313abdeed2aafe14cb80ae38e97b5bbdc62197de02664fbc6c17597822faa not found: ID does not exist" Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.628190 5112 scope.go:117] "RemoveContainer" containerID="26119bf950efdaf6bdf29fdd288dcde55f46e233074dadd05d09ae9662a98ac4" Dec 08 17:44:08 crc kubenswrapper[5112]: E1208 17:44:08.628524 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26119bf950efdaf6bdf29fdd288dcde55f46e233074dadd05d09ae9662a98ac4\": container with ID starting with 26119bf950efdaf6bdf29fdd288dcde55f46e233074dadd05d09ae9662a98ac4 not found: ID does not exist" containerID="26119bf950efdaf6bdf29fdd288dcde55f46e233074dadd05d09ae9662a98ac4" Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.628573 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26119bf950efdaf6bdf29fdd288dcde55f46e233074dadd05d09ae9662a98ac4"} err="failed to get container status \"26119bf950efdaf6bdf29fdd288dcde55f46e233074dadd05d09ae9662a98ac4\": rpc error: code = NotFound desc = could not find container \"26119bf950efdaf6bdf29fdd288dcde55f46e233074dadd05d09ae9662a98ac4\": container with ID starting with 26119bf950efdaf6bdf29fdd288dcde55f46e233074dadd05d09ae9662a98ac4 not found: ID does not exist" Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.628604 5112 scope.go:117] "RemoveContainer" containerID="62545f7eda9644c677205810fa88d197c6edaf171a4f7972db714c29d0e0c0f0" Dec 08 
17:44:08 crc kubenswrapper[5112]: E1208 17:44:08.628943 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62545f7eda9644c677205810fa88d197c6edaf171a4f7972db714c29d0e0c0f0\": container with ID starting with 62545f7eda9644c677205810fa88d197c6edaf171a4f7972db714c29d0e0c0f0 not found: ID does not exist" containerID="62545f7eda9644c677205810fa88d197c6edaf171a4f7972db714c29d0e0c0f0" Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.628982 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62545f7eda9644c677205810fa88d197c6edaf171a4f7972db714c29d0e0c0f0"} err="failed to get container status \"62545f7eda9644c677205810fa88d197c6edaf171a4f7972db714c29d0e0c0f0\": rpc error: code = NotFound desc = could not find container \"62545f7eda9644c677205810fa88d197c6edaf171a4f7972db714c29d0e0c0f0\": container with ID starting with 62545f7eda9644c677205810fa88d197c6edaf171a4f7972db714c29d0e0c0f0 not found: ID does not exist" Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.629009 5112 scope.go:117] "RemoveContainer" containerID="635a0eb4e03cd1986945a33c0535e16c4f6d5cd1fbcbdb9e1ecf08f5ef7f71a1" Dec 08 17:44:08 crc kubenswrapper[5112]: E1208 17:44:08.629382 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"635a0eb4e03cd1986945a33c0535e16c4f6d5cd1fbcbdb9e1ecf08f5ef7f71a1\": container with ID starting with 635a0eb4e03cd1986945a33c0535e16c4f6d5cd1fbcbdb9e1ecf08f5ef7f71a1 not found: ID does not exist" containerID="635a0eb4e03cd1986945a33c0535e16c4f6d5cd1fbcbdb9e1ecf08f5ef7f71a1" Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.629415 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"635a0eb4e03cd1986945a33c0535e16c4f6d5cd1fbcbdb9e1ecf08f5ef7f71a1"} err="failed to get container status 
\"635a0eb4e03cd1986945a33c0535e16c4f6d5cd1fbcbdb9e1ecf08f5ef7f71a1\": rpc error: code = NotFound desc = could not find container \"635a0eb4e03cd1986945a33c0535e16c4f6d5cd1fbcbdb9e1ecf08f5ef7f71a1\": container with ID starting with 635a0eb4e03cd1986945a33c0535e16c4f6d5cd1fbcbdb9e1ecf08f5ef7f71a1 not found: ID does not exist"
Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.629434 5112 scope.go:117] "RemoveContainer" containerID="3fbd9e0043ef37c10fbc9823215d21c9b1eee4025f73932cd6cbf3055d5f2397"
Dec 08 17:44:08 crc kubenswrapper[5112]: E1208 17:44:08.629632 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fbd9e0043ef37c10fbc9823215d21c9b1eee4025f73932cd6cbf3055d5f2397\": container with ID starting with 3fbd9e0043ef37c10fbc9823215d21c9b1eee4025f73932cd6cbf3055d5f2397 not found: ID does not exist" containerID="3fbd9e0043ef37c10fbc9823215d21c9b1eee4025f73932cd6cbf3055d5f2397"
Dec 08 17:44:08 crc kubenswrapper[5112]: I1208 17:44:08.629665 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fbd9e0043ef37c10fbc9823215d21c9b1eee4025f73932cd6cbf3055d5f2397"} err="failed to get container status \"3fbd9e0043ef37c10fbc9823215d21c9b1eee4025f73932cd6cbf3055d5f2397\": rpc error: code = NotFound desc = could not find container \"3fbd9e0043ef37c10fbc9823215d21c9b1eee4025f73932cd6cbf3055d5f2397\": container with ID starting with 3fbd9e0043ef37c10fbc9823215d21c9b1eee4025f73932cd6cbf3055d5f2397 not found: ID does not exist"
Dec 08 17:44:09 crc kubenswrapper[5112]: I1208 17:44:09.325644 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes"
Dec 08 17:44:11 crc kubenswrapper[5112]: E1208 17:44:11.566787 5112 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused"
Dec 08 17:44:11 crc kubenswrapper[5112]: E1208 17:44:11.568674 5112 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused"
Dec 08 17:44:11 crc kubenswrapper[5112]: E1208 17:44:11.569416 5112 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused"
Dec 08 17:44:11 crc kubenswrapper[5112]: E1208 17:44:11.569694 5112 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused"
Dec 08 17:44:11 crc kubenswrapper[5112]: E1208 17:44:11.570029 5112 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused"
Dec 08 17:44:11 crc kubenswrapper[5112]: I1208 17:44:11.570056 5112 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Dec 08 17:44:11 crc kubenswrapper[5112]: E1208 17:44:11.570363 5112 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="200ms"
Dec 08 17:44:11 crc kubenswrapper[5112]: I1208 17:44:11.706666 5112 patch_prober.go:28] interesting pod/machine-config-daemon-s6wzf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 17:44:11 crc kubenswrapper[5112]: I1208 17:44:11.706740 5112 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" podUID="95e46da0-94bb-4d22-804b-b3018984cdac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 17:44:11 crc kubenswrapper[5112]: E1208 17:44:11.771785 5112 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="400ms"
Dec 08 17:44:12 crc kubenswrapper[5112]: E1208 17:44:12.172455 5112 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="800ms"
Dec 08 17:44:12 crc kubenswrapper[5112]: E1208 17:44:12.973801 5112 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="1.6s"
Dec 08 17:44:13 crc kubenswrapper[5112]: I1208 17:44:13.321857 5112 status_manager.go:895] "Failed to get status for pod" podUID="ec521621-eb82-4b99-bd04-c1256bd46f3d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Dec 08 17:44:14 crc kubenswrapper[5112]: E1208 17:44:14.575380 5112 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="3.2s"
Dec 08 17:44:17 crc kubenswrapper[5112]: E1208 17:44:17.093221 5112 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.246:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187f4e75a84e95bf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:44:06.194386367 +0000 UTC m=+223.203935078,LastTimestamp:2025-12-08 17:44:06.194386367 +0000 UTC m=+223.203935078,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:44:17 crc kubenswrapper[5112]: E1208 17:44:17.776155 5112 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="6.4s"
Dec 08 17:44:19 crc kubenswrapper[5112]: I1208 17:44:19.316417 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:44:19 crc kubenswrapper[5112]: I1208 17:44:19.318025 5112 status_manager.go:895] "Failed to get status for pod" podUID="ec521621-eb82-4b99-bd04-c1256bd46f3d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Dec 08 17:44:19 crc kubenswrapper[5112]: I1208 17:44:19.332262 5112 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d35301b2-73ca-44c7-bb4c-e7e68d41ac54"
Dec 08 17:44:19 crc kubenswrapper[5112]: I1208 17:44:19.332399 5112 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d35301b2-73ca-44c7-bb4c-e7e68d41ac54"
Dec 08 17:44:19 crc kubenswrapper[5112]: E1208 17:44:19.332982 5112 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:44:19 crc kubenswrapper[5112]: I1208 17:44:19.333425 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:44:19 crc kubenswrapper[5112]: W1208 17:44:19.366422 5112 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57755cc5f99000cc11e193051474d4e2.slice/crio-28735c83786e9068006900b1a6f19acb49512a43eda5f8bb97320457484b5018 WatchSource:0}: Error finding container 28735c83786e9068006900b1a6f19acb49512a43eda5f8bb97320457484b5018: Status 404 returned error can't find the container with id 28735c83786e9068006900b1a6f19acb49512a43eda5f8bb97320457484b5018
Dec 08 17:44:19 crc kubenswrapper[5112]: I1208 17:44:19.556833 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"28735c83786e9068006900b1a6f19acb49512a43eda5f8bb97320457484b5018"}
Dec 08 17:44:19 crc kubenswrapper[5112]: I1208 17:44:19.559838 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Dec 08 17:44:19 crc kubenswrapper[5112]: I1208 17:44:19.559918 5112 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="1f8f801ff9a8f57c7935a0dc59f8db7ead28e39b9b0235c5fbac0238e1ae4d1c" exitCode=1
Dec 08 17:44:19 crc kubenswrapper[5112]: I1208 17:44:19.560026 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"1f8f801ff9a8f57c7935a0dc59f8db7ead28e39b9b0235c5fbac0238e1ae4d1c"}
Dec 08 17:44:19 crc kubenswrapper[5112]: I1208 17:44:19.560506 5112 scope.go:117] "RemoveContainer" containerID="1f8f801ff9a8f57c7935a0dc59f8db7ead28e39b9b0235c5fbac0238e1ae4d1c"
Dec 08 17:44:19 crc kubenswrapper[5112]: I1208 17:44:19.561042 5112 status_manager.go:895] "Failed to get status for pod" podUID="ec521621-eb82-4b99-bd04-c1256bd46f3d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Dec 08 17:44:19 crc kubenswrapper[5112]: I1208 17:44:19.561456 5112 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Dec 08 17:44:20 crc kubenswrapper[5112]: I1208 17:44:20.568341 5112 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="3327d2d541e086f312e7527600e6b65fe95ad8059ce369eea57e5179feec770b" exitCode=0
Dec 08 17:44:20 crc kubenswrapper[5112]: I1208 17:44:20.568648 5112 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d35301b2-73ca-44c7-bb4c-e7e68d41ac54"
Dec 08 17:44:20 crc kubenswrapper[5112]: I1208 17:44:20.568673 5112 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d35301b2-73ca-44c7-bb4c-e7e68d41ac54"
Dec 08 17:44:20 crc kubenswrapper[5112]: I1208 17:44:20.568631 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"3327d2d541e086f312e7527600e6b65fe95ad8059ce369eea57e5179feec770b"}
Dec 08 17:44:20 crc kubenswrapper[5112]: E1208 17:44:20.569054 5112 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:44:20 crc kubenswrapper[5112]: I1208 17:44:20.569305 5112 status_manager.go:895] "Failed to get status for pod" podUID="ec521621-eb82-4b99-bd04-c1256bd46f3d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Dec 08 17:44:20 crc kubenswrapper[5112]: I1208 17:44:20.569523 5112 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Dec 08 17:44:20 crc kubenswrapper[5112]: I1208 17:44:20.572298 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Dec 08 17:44:20 crc kubenswrapper[5112]: I1208 17:44:20.572405 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"3fff4f8141aa0fe7cd50076ce5cffd49684650761891c6a969f9f92275d7710b"}
Dec 08 17:44:20 crc kubenswrapper[5112]: I1208 17:44:20.573126 5112 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Dec 08 17:44:20 crc kubenswrapper[5112]: I1208 17:44:20.573445 5112 status_manager.go:895] "Failed to get status for pod" podUID="ec521621-eb82-4b99-bd04-c1256bd46f3d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Dec 08 17:44:21 crc kubenswrapper[5112]: I1208 17:44:21.582694 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"528e70890b63cb30a74e136e50ad1ae5f9631e3cc35e61be7f7fafb96d7c6906"}
Dec 08 17:44:21 crc kubenswrapper[5112]: I1208 17:44:21.583124 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"1503d517973be229a31831632feabbf3044630f65ccb4187eebb79479d6c3e80"}
Dec 08 17:44:21 crc kubenswrapper[5112]: I1208 17:44:21.583140 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"c24cfd60bdcc98e42733b5fffa3c78000146db7a02e606c7243d82c1fc576426"}
Dec 08 17:44:22 crc kubenswrapper[5112]: I1208 17:44:22.591662 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"b9a4a89fc7cd0805e9f54a344f36f3064a77ceee936da8ac34319bfc8c354b9e"}
Dec 08 17:44:22 crc kubenswrapper[5112]: I1208 17:44:22.591951 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"d617adf1e1319560393b79b20030e77e51332f0ac67a7f6d4555ac9f443f6e3d"}
Dec 08 17:44:22 crc kubenswrapper[5112]: I1208 17:44:22.591967 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:44:22 crc kubenswrapper[5112]: I1208 17:44:22.592172 5112 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d35301b2-73ca-44c7-bb4c-e7e68d41ac54"
Dec 08 17:44:22 crc kubenswrapper[5112]: I1208 17:44:22.592204 5112 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d35301b2-73ca-44c7-bb4c-e7e68d41ac54"
Dec 08 17:44:24 crc kubenswrapper[5112]: I1208 17:44:24.333833 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:44:24 crc kubenswrapper[5112]: I1208 17:44:24.334238 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:44:24 crc kubenswrapper[5112]: I1208 17:44:24.345944 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:44:25 crc kubenswrapper[5112]: I1208 17:44:25.788889 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:44:25 crc kubenswrapper[5112]: I1208 17:44:25.795845 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:44:26 crc kubenswrapper[5112]: I1208 17:44:26.434412 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:44:27 crc kubenswrapper[5112]: I1208 17:44:27.602410 5112 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:44:27 crc kubenswrapper[5112]: I1208 17:44:27.603463 5112 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:44:27 crc kubenswrapper[5112]: I1208 17:44:27.621612 5112 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d35301b2-73ca-44c7-bb4c-e7e68d41ac54"
Dec 08 17:44:27 crc kubenswrapper[5112]: I1208 17:44:27.621643 5112 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d35301b2-73ca-44c7-bb4c-e7e68d41ac54"
Dec 08 17:44:27 crc kubenswrapper[5112]: I1208 17:44:27.626658 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:44:27 crc kubenswrapper[5112]: I1208 17:44:27.666641 5112 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="cc036a3c-32fc-49f3-934f-4c4298c3e13a"
Dec 08 17:44:28 crc kubenswrapper[5112]: I1208 17:44:28.626688 5112 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d35301b2-73ca-44c7-bb4c-e7e68d41ac54"
Dec 08 17:44:28 crc kubenswrapper[5112]: I1208 17:44:28.626726 5112 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d35301b2-73ca-44c7-bb4c-e7e68d41ac54"
Dec 08 17:44:28 crc kubenswrapper[5112]: I1208 17:44:28.630229 5112 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="cc036a3c-32fc-49f3-934f-4c4298c3e13a"
Dec 08 17:44:37 crc kubenswrapper[5112]: I1208 17:44:37.361567 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\""
Dec 08 17:44:37 crc kubenswrapper[5112]: I1208 17:44:37.511317 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\""
Dec 08 17:44:37 crc kubenswrapper[5112]: I1208 17:44:37.627016 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:44:38 crc kubenswrapper[5112]: I1208 17:44:38.204322 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\""
Dec 08 17:44:38 crc kubenswrapper[5112]: I1208 17:44:38.287016 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\""
Dec 08 17:44:38 crc kubenswrapper[5112]: I1208 17:44:38.378215 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\""
Dec 08 17:44:38 crc kubenswrapper[5112]: I1208 17:44:38.392394 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\""
Dec 08 17:44:38 crc kubenswrapper[5112]: I1208 17:44:38.398777 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\""
Dec 08 17:44:38 crc kubenswrapper[5112]: I1208 17:44:38.627232 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\""
Dec 08 17:44:38 crc kubenswrapper[5112]: I1208 17:44:38.643368 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\""
Dec 08 17:44:38 crc kubenswrapper[5112]: I1208 17:44:38.651202 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\""
Dec 08 17:44:38 crc kubenswrapper[5112]: I1208 17:44:38.693066 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\""
Dec 08 17:44:38 crc kubenswrapper[5112]: I1208 17:44:38.759515 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\""
Dec 08 17:44:39 crc kubenswrapper[5112]: I1208 17:44:39.006343 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\""
Dec 08 17:44:39 crc kubenswrapper[5112]: I1208 17:44:39.542377 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\""
Dec 08 17:44:39 crc kubenswrapper[5112]: I1208 17:44:39.592047 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Dec 08 17:44:39 crc kubenswrapper[5112]: I1208 17:44:39.931722 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\""
Dec 08 17:44:39 crc kubenswrapper[5112]: I1208 17:44:39.945268 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\""
Dec 08 17:44:39 crc kubenswrapper[5112]: I1208 17:44:39.971028 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\""
Dec 08 17:44:40 crc kubenswrapper[5112]: I1208 17:44:40.471514 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\""
Dec 08 17:44:40 crc kubenswrapper[5112]: I1208 17:44:40.494580 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\""
Dec 08 17:44:40 crc kubenswrapper[5112]: I1208 17:44:40.591899 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\""
Dec 08 17:44:40 crc kubenswrapper[5112]: I1208 17:44:40.841819 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\""
Dec 08 17:44:40 crc kubenswrapper[5112]: I1208 17:44:40.844832 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\""
Dec 08 17:44:40 crc kubenswrapper[5112]: I1208 17:44:40.845340 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\""
Dec 08 17:44:40 crc kubenswrapper[5112]: I1208 17:44:40.898652 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\""
Dec 08 17:44:40 crc kubenswrapper[5112]: I1208 17:44:40.922294 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\""
Dec 08 17:44:40 crc kubenswrapper[5112]: I1208 17:44:40.987290 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\""
Dec 08 17:44:40 crc kubenswrapper[5112]: I1208 17:44:40.998460 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\""
Dec 08 17:44:41 crc kubenswrapper[5112]: I1208 17:44:41.042467 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\""
Dec 08 17:44:41 crc kubenswrapper[5112]: I1208 17:44:41.105018 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\""
Dec 08 17:44:41 crc kubenswrapper[5112]: I1208 17:44:41.153160 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\""
Dec 08 17:44:41 crc kubenswrapper[5112]: I1208 17:44:41.166383 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\""
Dec 08 17:44:41 crc kubenswrapper[5112]: I1208 17:44:41.257890 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\""
Dec 08 17:44:41 crc kubenswrapper[5112]: I1208 17:44:41.371532 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\""
Dec 08 17:44:41 crc kubenswrapper[5112]: I1208 17:44:41.409641 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\""
Dec 08 17:44:41 crc kubenswrapper[5112]: I1208 17:44:41.442917 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\""
Dec 08 17:44:41 crc kubenswrapper[5112]: I1208 17:44:41.549025 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\""
Dec 08 17:44:41 crc kubenswrapper[5112]: I1208 17:44:41.551337 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\""
Dec 08 17:44:41 crc kubenswrapper[5112]: I1208 17:44:41.613764 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Dec 08 17:44:41 crc kubenswrapper[5112]: I1208 17:44:41.706941 5112 patch_prober.go:28] interesting pod/machine-config-daemon-s6wzf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 17:44:41 crc kubenswrapper[5112]: I1208 17:44:41.707016 5112 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" podUID="95e46da0-94bb-4d22-804b-b3018984cdac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 17:44:41 crc kubenswrapper[5112]: I1208 17:44:41.742733 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\""
Dec 08 17:44:41 crc kubenswrapper[5112]: I1208 17:44:41.844949 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\""
Dec 08 17:44:41 crc kubenswrapper[5112]: I1208 17:44:41.905455 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\""
Dec 08 17:44:42 crc kubenswrapper[5112]: I1208 17:44:42.146352 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\""
Dec 08 17:44:42 crc kubenswrapper[5112]: I1208 17:44:42.177586 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\""
Dec 08 17:44:42 crc kubenswrapper[5112]: I1208 17:44:42.272386 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Dec 08 17:44:42 crc kubenswrapper[5112]: I1208 17:44:42.272490 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\""
Dec 08 17:44:42 crc kubenswrapper[5112]: I1208 17:44:42.340742 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\""
Dec 08 17:44:42 crc kubenswrapper[5112]: I1208 17:44:42.350755 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\""
Dec 08 17:44:42 crc kubenswrapper[5112]: I1208 17:44:42.369327 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\""
Dec 08 17:44:42 crc kubenswrapper[5112]: I1208 17:44:42.375357 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\""
Dec 08 17:44:42 crc kubenswrapper[5112]: I1208 17:44:42.380252 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Dec 08 17:44:42 crc kubenswrapper[5112]: I1208 17:44:42.392891 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\""
Dec 08 17:44:42 crc kubenswrapper[5112]: I1208 17:44:42.477016 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\""
Dec 08 17:44:42 crc kubenswrapper[5112]: I1208 17:44:42.566746 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\""
Dec 08 17:44:42 crc kubenswrapper[5112]: I1208 17:44:42.571636 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\""
Dec 08 17:44:42 crc kubenswrapper[5112]: I1208 17:44:42.577033 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\""
Dec 08 17:44:42 crc kubenswrapper[5112]: I1208 17:44:42.587451 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\""
Dec 08 17:44:42 crc kubenswrapper[5112]: I1208 17:44:42.803304 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\""
Dec 08 17:44:42 crc kubenswrapper[5112]: I1208 17:44:42.811267 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\""
Dec 08 17:44:42 crc kubenswrapper[5112]: I1208 17:44:42.887421 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\""
Dec 08 17:44:42 crc kubenswrapper[5112]: I1208 17:44:42.926290 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\""
Dec 08 17:44:43 crc kubenswrapper[5112]: I1208 17:44:43.050345 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\""
Dec 08 17:44:43 crc kubenswrapper[5112]: I1208 17:44:43.075951 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\""
Dec 08 17:44:43 crc kubenswrapper[5112]: I1208 17:44:43.084946 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\""
Dec 08 17:44:43 crc kubenswrapper[5112]: I1208 17:44:43.128814 5112 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160"
Dec 08 17:44:43 crc kubenswrapper[5112]: I1208 17:44:43.149939 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\""
Dec 08 17:44:43 crc kubenswrapper[5112]: I1208 17:44:43.165582 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\""
Dec 08 17:44:43 crc kubenswrapper[5112]: I1208 17:44:43.189697 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\""
Dec 08 17:44:43 crc kubenswrapper[5112]: I1208 17:44:43.195270 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\""
Dec 08 17:44:43 crc kubenswrapper[5112]: I1208 17:44:43.254523 5112 ???:1] "http: TLS handshake error from 192.168.126.11:52298: no serving certificate available for the kubelet"
Dec 08 17:44:43 crc kubenswrapper[5112]: I1208 17:44:43.358945 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\""
Dec 08 17:44:43 crc kubenswrapper[5112]: I1208 17:44:43.415964 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\""
Dec 08 17:44:43 crc kubenswrapper[5112]: I1208 17:44:43.521764 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\""
Dec 08 17:44:43 crc kubenswrapper[5112]: I1208 17:44:43.598302 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\""
Dec 08 17:44:43 crc kubenswrapper[5112]: I1208 17:44:43.655723 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\""
Dec 08 17:44:43 crc kubenswrapper[5112]: I1208 17:44:43.681600 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\""
Dec 08 17:44:43 crc kubenswrapper[5112]: I1208 17:44:43.687422 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\""
Dec 08 17:44:43 crc kubenswrapper[5112]: I1208 17:44:43.754679 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\""
Dec 08 17:44:43 crc kubenswrapper[5112]: I1208 17:44:43.761933 5112 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Dec 08 17:44:43 crc kubenswrapper[5112]: I1208 17:44:43.834009 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\""
Dec 08 17:44:43 crc kubenswrapper[5112]: I1208 17:44:43.846636 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\""
Dec 08 17:44:43 crc kubenswrapper[5112]: I1208 17:44:43.848873 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\""
Dec 08 17:44:43 crc kubenswrapper[5112]: I1208 17:44:43.970573 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\""
Dec 08 17:44:44 crc kubenswrapper[5112]: I1208 17:44:44.268511 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\""
Dec 08 17:44:44 crc kubenswrapper[5112]: I1208 17:44:44.308061 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\""
Dec 08 17:44:44 crc kubenswrapper[5112]: I1208 17:44:44.351320 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\""
Dec 08 17:44:44 crc kubenswrapper[5112]: I1208 17:44:44.394913 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\""
Dec 08 17:44:44 crc kubenswrapper[5112]: I1208 17:44:44.499268 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\""
Dec 08 17:44:44 crc kubenswrapper[5112]: I1208 17:44:44.547313 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\""
Dec 08 17:44:44 crc kubenswrapper[5112]: I1208 17:44:44.569526 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\""
Dec 08 17:44:44 crc kubenswrapper[5112]: I1208 17:44:44.608201 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\""
Dec 08 17:44:44 crc kubenswrapper[5112]: I1208 17:44:44.623729 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\""
Dec 08 17:44:44 crc kubenswrapper[5112]: I1208 17:44:44.721946 5112 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Dec 08 17:44:44 crc kubenswrapper[5112]: I1208 17:44:44.802003 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\""
Dec 08 17:44:44 crc kubenswrapper[5112]: I1208 17:44:44.956254 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 08 17:44:44 crc kubenswrapper[5112]: I1208 17:44:44.961218 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Dec 08 17:44:45 crc kubenswrapper[5112]: I1208 17:44:45.048637 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Dec 08 17:44:45 crc kubenswrapper[5112]: I1208 17:44:45.048693 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Dec 08 17:44:45 crc kubenswrapper[5112]: I1208 17:44:45.205120 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Dec 08 17:44:45 crc kubenswrapper[5112]: I1208 17:44:45.244612 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Dec 08 17:44:45 crc kubenswrapper[5112]: I1208 17:44:45.245293 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Dec 08 17:44:45 crc kubenswrapper[5112]: I1208 17:44:45.264794 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Dec 08 17:44:45 crc kubenswrapper[5112]: I1208 17:44:45.280465 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Dec 08 17:44:45 crc kubenswrapper[5112]: I1208 17:44:45.286741 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Dec 08 17:44:45 crc kubenswrapper[5112]: I1208 
17:44:45.337296 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Dec 08 17:44:45 crc kubenswrapper[5112]: I1208 17:44:45.345085 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Dec 08 17:44:45 crc kubenswrapper[5112]: I1208 17:44:45.419360 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Dec 08 17:44:45 crc kubenswrapper[5112]: I1208 17:44:45.551247 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Dec 08 17:44:45 crc kubenswrapper[5112]: I1208 17:44:45.695472 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Dec 08 17:44:45 crc kubenswrapper[5112]: I1208 17:44:45.800711 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Dec 08 17:44:45 crc kubenswrapper[5112]: I1208 17:44:45.885646 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Dec 08 17:44:45 crc kubenswrapper[5112]: I1208 17:44:45.897232 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Dec 08 17:44:45 crc kubenswrapper[5112]: I1208 17:44:45.920577 5112 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 17:44:45 crc kubenswrapper[5112]: I1208 17:44:45.957519 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Dec 08 17:44:45 crc kubenswrapper[5112]: I1208 
17:44:45.959480 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Dec 08 17:44:45 crc kubenswrapper[5112]: I1208 17:44:45.961460 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Dec 08 17:44:45 crc kubenswrapper[5112]: I1208 17:44:45.992786 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Dec 08 17:44:46 crc kubenswrapper[5112]: I1208 17:44:46.040634 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Dec 08 17:44:46 crc kubenswrapper[5112]: I1208 17:44:46.183233 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Dec 08 17:44:46 crc kubenswrapper[5112]: I1208 17:44:46.244583 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Dec 08 17:44:46 crc kubenswrapper[5112]: I1208 17:44:46.290764 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Dec 08 17:44:46 crc kubenswrapper[5112]: I1208 17:44:46.295736 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Dec 08 17:44:46 crc kubenswrapper[5112]: I1208 17:44:46.359311 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Dec 08 17:44:46 crc kubenswrapper[5112]: I1208 17:44:46.405481 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Dec 08 17:44:46 crc 
kubenswrapper[5112]: I1208 17:44:46.421280 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Dec 08 17:44:46 crc kubenswrapper[5112]: I1208 17:44:46.424479 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Dec 08 17:44:46 crc kubenswrapper[5112]: I1208 17:44:46.454690 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Dec 08 17:44:46 crc kubenswrapper[5112]: I1208 17:44:46.462483 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Dec 08 17:44:46 crc kubenswrapper[5112]: I1208 17:44:46.521298 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Dec 08 17:44:46 crc kubenswrapper[5112]: I1208 17:44:46.587550 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Dec 08 17:44:46 crc kubenswrapper[5112]: I1208 17:44:46.594181 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Dec 08 17:44:46 crc kubenswrapper[5112]: I1208 17:44:46.642907 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Dec 08 17:44:46 crc kubenswrapper[5112]: I1208 17:44:46.683178 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Dec 08 17:44:46 crc kubenswrapper[5112]: I1208 17:44:46.700776 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Dec 08 17:44:46 crc 
kubenswrapper[5112]: I1208 17:44:46.730285 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:44:46 crc kubenswrapper[5112]: I1208 17:44:46.749112 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Dec 08 17:44:46 crc kubenswrapper[5112]: I1208 17:44:46.928417 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Dec 08 17:44:46 crc kubenswrapper[5112]: I1208 17:44:46.939712 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Dec 08 17:44:47 crc kubenswrapper[5112]: I1208 17:44:47.001297 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Dec 08 17:44:47 crc kubenswrapper[5112]: I1208 17:44:47.013742 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Dec 08 17:44:47 crc kubenswrapper[5112]: I1208 17:44:47.074628 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Dec 08 17:44:47 crc kubenswrapper[5112]: I1208 17:44:47.099251 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Dec 08 17:44:47 crc kubenswrapper[5112]: I1208 17:44:47.108769 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Dec 08 17:44:47 crc kubenswrapper[5112]: I1208 17:44:47.253280 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Dec 08 
17:44:47 crc kubenswrapper[5112]: I1208 17:44:47.262143 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Dec 08 17:44:47 crc kubenswrapper[5112]: I1208 17:44:47.375787 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Dec 08 17:44:47 crc kubenswrapper[5112]: I1208 17:44:47.462175 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Dec 08 17:44:47 crc kubenswrapper[5112]: I1208 17:44:47.590213 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Dec 08 17:44:47 crc kubenswrapper[5112]: I1208 17:44:47.743247 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Dec 08 17:44:47 crc kubenswrapper[5112]: I1208 17:44:47.819396 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Dec 08 17:44:47 crc kubenswrapper[5112]: I1208 17:44:47.832210 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Dec 08 17:44:47 crc kubenswrapper[5112]: I1208 17:44:47.862189 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Dec 08 17:44:47 crc kubenswrapper[5112]: I1208 17:44:47.888982 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:44:47 crc kubenswrapper[5112]: I1208 17:44:47.897019 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Dec 08 17:44:47 crc kubenswrapper[5112]: I1208 17:44:47.938491 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Dec 08 17:44:47 crc kubenswrapper[5112]: I1208 17:44:47.973599 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Dec 08 17:44:47 crc kubenswrapper[5112]: I1208 17:44:47.981113 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 08 17:44:48 crc kubenswrapper[5112]: I1208 17:44:48.166700 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Dec 08 17:44:48 crc kubenswrapper[5112]: I1208 17:44:48.179953 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:44:48 crc kubenswrapper[5112]: I1208 17:44:48.185990 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Dec 08 17:44:48 crc kubenswrapper[5112]: I1208 17:44:48.245502 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Dec 08 17:44:48 crc kubenswrapper[5112]: I1208 17:44:48.252465 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Dec 08 17:44:48 crc kubenswrapper[5112]: I1208 17:44:48.279298 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:44:48 crc kubenswrapper[5112]: I1208 17:44:48.283174 5112 reflector.go:430] "Caches 
populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Dec 08 17:44:48 crc kubenswrapper[5112]: I1208 17:44:48.307181 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Dec 08 17:44:48 crc kubenswrapper[5112]: I1208 17:44:48.348541 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Dec 08 17:44:48 crc kubenswrapper[5112]: I1208 17:44:48.434834 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Dec 08 17:44:48 crc kubenswrapper[5112]: I1208 17:44:48.461163 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Dec 08 17:44:48 crc kubenswrapper[5112]: I1208 17:44:48.476720 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Dec 08 17:44:48 crc kubenswrapper[5112]: I1208 17:44:48.489575 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Dec 08 17:44:48 crc kubenswrapper[5112]: I1208 17:44:48.505308 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:44:48 crc kubenswrapper[5112]: I1208 17:44:48.589408 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Dec 08 17:44:48 crc kubenswrapper[5112]: I1208 17:44:48.612934 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Dec 08 
17:44:48 crc kubenswrapper[5112]: I1208 17:44:48.630590 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Dec 08 17:44:48 crc kubenswrapper[5112]: I1208 17:44:48.732384 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Dec 08 17:44:48 crc kubenswrapper[5112]: I1208 17:44:48.737628 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Dec 08 17:44:48 crc kubenswrapper[5112]: I1208 17:44:48.775301 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Dec 08 17:44:48 crc kubenswrapper[5112]: I1208 17:44:48.781160 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Dec 08 17:44:48 crc kubenswrapper[5112]: I1208 17:44:48.822507 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Dec 08 17:44:48 crc kubenswrapper[5112]: I1208 17:44:48.853955 5112 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 17:44:48 crc kubenswrapper[5112]: I1208 17:44:48.870410 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Dec 08 17:44:48 crc kubenswrapper[5112]: I1208 17:44:48.877064 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:44:48 crc kubenswrapper[5112]: I1208 17:44:48.916510 5112 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Dec 08 17:44:48 crc kubenswrapper[5112]: I1208 17:44:48.917454 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Dec 08 17:44:48 crc kubenswrapper[5112]: I1208 17:44:48.997026 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Dec 08 17:44:49 crc kubenswrapper[5112]: I1208 17:44:49.035678 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 08 17:44:49 crc kubenswrapper[5112]: I1208 17:44:49.036476 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Dec 08 17:44:49 crc kubenswrapper[5112]: I1208 17:44:49.204855 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Dec 08 17:44:49 crc kubenswrapper[5112]: I1208 17:44:49.229737 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:44:49 crc kubenswrapper[5112]: I1208 17:44:49.285709 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Dec 08 17:44:49 crc kubenswrapper[5112]: I1208 17:44:49.325609 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Dec 08 17:44:49 crc kubenswrapper[5112]: I1208 17:44:49.338806 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Dec 08 17:44:49 crc kubenswrapper[5112]: 
I1208 17:44:49.360773 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Dec 08 17:44:49 crc kubenswrapper[5112]: I1208 17:44:49.397883 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Dec 08 17:44:49 crc kubenswrapper[5112]: I1208 17:44:49.405985 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Dec 08 17:44:49 crc kubenswrapper[5112]: I1208 17:44:49.421383 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 08 17:44:49 crc kubenswrapper[5112]: I1208 17:44:49.600117 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Dec 08 17:44:49 crc kubenswrapper[5112]: I1208 17:44:49.743169 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Dec 08 17:44:49 crc kubenswrapper[5112]: I1208 17:44:49.898016 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:44:49 crc kubenswrapper[5112]: I1208 17:44:49.958145 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Dec 08 17:44:50 crc kubenswrapper[5112]: I1208 17:44:50.044765 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Dec 08 17:44:50 crc kubenswrapper[5112]: I1208 17:44:50.170045 5112 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Dec 08 17:44:50 crc kubenswrapper[5112]: I1208 17:44:50.194712 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Dec 08 17:44:50 crc kubenswrapper[5112]: I1208 17:44:50.250338 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Dec 08 17:44:50 crc kubenswrapper[5112]: I1208 17:44:50.262371 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Dec 08 17:44:50 crc kubenswrapper[5112]: I1208 17:44:50.308772 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Dec 08 17:44:50 crc kubenswrapper[5112]: I1208 17:44:50.310037 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Dec 08 17:44:50 crc kubenswrapper[5112]: I1208 17:44:50.347159 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Dec 08 17:44:50 crc kubenswrapper[5112]: I1208 17:44:50.366145 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Dec 08 17:44:50 crc kubenswrapper[5112]: I1208 17:44:50.392520 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Dec 08 17:44:50 crc kubenswrapper[5112]: I1208 17:44:50.394278 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Dec 08 17:44:50 crc kubenswrapper[5112]: I1208 17:44:50.397973 5112 reflector.go:430] "Caches 
populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Dec 08 17:44:50 crc kubenswrapper[5112]: I1208 17:44:50.530915 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Dec 08 17:44:50 crc kubenswrapper[5112]: I1208 17:44:50.570223 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Dec 08 17:44:50 crc kubenswrapper[5112]: I1208 17:44:50.714010 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Dec 08 17:44:50 crc kubenswrapper[5112]: I1208 17:44:50.725584 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Dec 08 17:44:50 crc kubenswrapper[5112]: I1208 17:44:50.774837 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Dec 08 17:44:50 crc kubenswrapper[5112]: I1208 17:44:50.811962 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Dec 08 17:44:50 crc kubenswrapper[5112]: I1208 17:44:50.929863 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Dec 08 17:44:50 crc kubenswrapper[5112]: I1208 17:44:50.989737 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Dec 08 17:44:51 crc kubenswrapper[5112]: I1208 17:44:51.041630 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Dec 08 17:44:51 crc kubenswrapper[5112]: I1208 17:44:51.070027 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Dec 08 17:44:51 crc kubenswrapper[5112]: I1208 17:44:51.184905 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Dec 08 17:44:51 crc kubenswrapper[5112]: I1208 17:44:51.307296 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Dec 08 17:44:51 crc kubenswrapper[5112]: I1208 17:44:51.345105 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 08 17:44:51 crc kubenswrapper[5112]: I1208 17:44:51.521136 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 08 17:44:51 crc kubenswrapper[5112]: I1208 17:44:51.539246 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Dec 08 17:44:51 crc kubenswrapper[5112]: I1208 17:44:51.706786 5112 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Dec 08 17:44:51 crc kubenswrapper[5112]: I1208 17:44:51.711736 5112 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 08 17:44:51 crc kubenswrapper[5112]: I1208 17:44:51.711802 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 08 17:44:51 crc kubenswrapper[5112]: I1208 17:44:51.717521 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:51 crc kubenswrapper[5112]: I1208 17:44:51.766340 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=24.766283375 
podStartE2EDuration="24.766283375s" podCreationTimestamp="2025-12-08 17:44:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:51.742244797 +0000 UTC m=+268.751793488" watchObservedRunningTime="2025-12-08 17:44:51.766283375 +0000 UTC m=+268.775832096" Dec 08 17:44:51 crc kubenswrapper[5112]: I1208 17:44:51.770461 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Dec 08 17:44:51 crc kubenswrapper[5112]: I1208 17:44:51.920377 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Dec 08 17:44:51 crc kubenswrapper[5112]: I1208 17:44:51.932992 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:44:51 crc kubenswrapper[5112]: I1208 17:44:51.975989 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:44:52 crc kubenswrapper[5112]: I1208 17:44:52.032191 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Dec 08 17:44:52 crc kubenswrapper[5112]: I1208 17:44:52.167921 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 08 17:44:52 crc kubenswrapper[5112]: I1208 17:44:52.242034 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Dec 08 17:44:52 crc kubenswrapper[5112]: I1208 17:44:52.273560 5112 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Dec 08 17:44:52 crc kubenswrapper[5112]: I1208 17:44:52.291850 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Dec 08 17:44:52 crc kubenswrapper[5112]: I1208 17:44:52.337270 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Dec 08 17:44:52 crc kubenswrapper[5112]: I1208 17:44:52.349914 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Dec 08 17:44:52 crc kubenswrapper[5112]: I1208 17:44:52.478358 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Dec 08 17:44:52 crc kubenswrapper[5112]: I1208 17:44:52.497313 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Dec 08 17:44:52 crc kubenswrapper[5112]: I1208 17:44:52.603363 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Dec 08 17:44:52 crc kubenswrapper[5112]: I1208 17:44:52.664266 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Dec 08 17:44:52 crc kubenswrapper[5112]: I1208 17:44:52.733063 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Dec 08 17:44:52 crc kubenswrapper[5112]: I1208 17:44:52.781125 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Dec 08 17:44:52 crc kubenswrapper[5112]: I1208 17:44:52.835137 5112 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Dec 08 17:44:52 crc kubenswrapper[5112]: I1208 17:44:52.997257 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Dec 08 17:44:53 crc kubenswrapper[5112]: I1208 17:44:53.021716 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:44:53 crc kubenswrapper[5112]: I1208 17:44:53.057565 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Dec 08 17:44:53 crc kubenswrapper[5112]: I1208 17:44:53.083950 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Dec 08 17:44:53 crc kubenswrapper[5112]: I1208 17:44:53.177210 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Dec 08 17:44:53 crc kubenswrapper[5112]: I1208 17:44:53.239881 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Dec 08 17:44:53 crc kubenswrapper[5112]: I1208 17:44:53.568253 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Dec 08 17:44:53 crc kubenswrapper[5112]: I1208 17:44:53.640417 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Dec 08 17:44:54 crc kubenswrapper[5112]: I1208 17:44:54.336536 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Dec 08 17:45:00 crc 
kubenswrapper[5112]: I1208 17:45:00.164123 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420265-q2z4q"] Dec 08 17:45:00 crc kubenswrapper[5112]: I1208 17:45:00.166121 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ec521621-eb82-4b99-bd04-c1256bd46f3d" containerName="installer" Dec 08 17:45:00 crc kubenswrapper[5112]: I1208 17:45:00.166139 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec521621-eb82-4b99-bd04-c1256bd46f3d" containerName="installer" Dec 08 17:45:00 crc kubenswrapper[5112]: I1208 17:45:00.166313 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="ec521621-eb82-4b99-bd04-c1256bd46f3d" containerName="installer" Dec 08 17:45:00 crc kubenswrapper[5112]: I1208 17:45:00.179235 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420265-q2z4q"] Dec 08 17:45:00 crc kubenswrapper[5112]: I1208 17:45:00.179377 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-q2z4q" Dec 08 17:45:00 crc kubenswrapper[5112]: I1208 17:45:00.183999 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Dec 08 17:45:00 crc kubenswrapper[5112]: I1208 17:45:00.183999 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Dec 08 17:45:00 crc kubenswrapper[5112]: I1208 17:45:00.218722 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a40006cc-472a-45ef-a674-9178066e15da-secret-volume\") pod \"collect-profiles-29420265-q2z4q\" (UID: \"a40006cc-472a-45ef-a674-9178066e15da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-q2z4q" Dec 08 17:45:00 crc kubenswrapper[5112]: I1208 17:45:00.218779 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54p9w\" (UniqueName: \"kubernetes.io/projected/a40006cc-472a-45ef-a674-9178066e15da-kube-api-access-54p9w\") pod \"collect-profiles-29420265-q2z4q\" (UID: \"a40006cc-472a-45ef-a674-9178066e15da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-q2z4q" Dec 08 17:45:00 crc kubenswrapper[5112]: I1208 17:45:00.218969 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a40006cc-472a-45ef-a674-9178066e15da-config-volume\") pod \"collect-profiles-29420265-q2z4q\" (UID: \"a40006cc-472a-45ef-a674-9178066e15da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-q2z4q" Dec 08 17:45:00 crc kubenswrapper[5112]: I1208 17:45:00.320116 5112 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a40006cc-472a-45ef-a674-9178066e15da-secret-volume\") pod \"collect-profiles-29420265-q2z4q\" (UID: \"a40006cc-472a-45ef-a674-9178066e15da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-q2z4q" Dec 08 17:45:00 crc kubenswrapper[5112]: I1208 17:45:00.320210 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-54p9w\" (UniqueName: \"kubernetes.io/projected/a40006cc-472a-45ef-a674-9178066e15da-kube-api-access-54p9w\") pod \"collect-profiles-29420265-q2z4q\" (UID: \"a40006cc-472a-45ef-a674-9178066e15da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-q2z4q" Dec 08 17:45:00 crc kubenswrapper[5112]: I1208 17:45:00.320331 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a40006cc-472a-45ef-a674-9178066e15da-config-volume\") pod \"collect-profiles-29420265-q2z4q\" (UID: \"a40006cc-472a-45ef-a674-9178066e15da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-q2z4q" Dec 08 17:45:00 crc kubenswrapper[5112]: I1208 17:45:00.321512 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a40006cc-472a-45ef-a674-9178066e15da-config-volume\") pod \"collect-profiles-29420265-q2z4q\" (UID: \"a40006cc-472a-45ef-a674-9178066e15da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-q2z4q" Dec 08 17:45:00 crc kubenswrapper[5112]: I1208 17:45:00.327801 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a40006cc-472a-45ef-a674-9178066e15da-secret-volume\") pod \"collect-profiles-29420265-q2z4q\" (UID: \"a40006cc-472a-45ef-a674-9178066e15da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-q2z4q" Dec 08 17:45:00 crc 
kubenswrapper[5112]: I1208 17:45:00.341539 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-54p9w\" (UniqueName: \"kubernetes.io/projected/a40006cc-472a-45ef-a674-9178066e15da-kube-api-access-54p9w\") pod \"collect-profiles-29420265-q2z4q\" (UID: \"a40006cc-472a-45ef-a674-9178066e15da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-q2z4q" Dec 08 17:45:00 crc kubenswrapper[5112]: I1208 17:45:00.495182 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-q2z4q" Dec 08 17:45:00 crc kubenswrapper[5112]: I1208 17:45:00.890271 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420265-q2z4q"] Dec 08 17:45:01 crc kubenswrapper[5112]: I1208 17:45:01.217541 5112 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 08 17:45:01 crc kubenswrapper[5112]: I1208 17:45:01.217932 5112 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://297a21012e286735157b1fa600a73370e07196b403fee7aa06754cb222d72dcd" gracePeriod=5 Dec 08 17:45:01 crc kubenswrapper[5112]: I1208 17:45:01.811032 5112 generic.go:358] "Generic (PLEG): container finished" podID="a40006cc-472a-45ef-a674-9178066e15da" containerID="3f7c265187bef579dce434330851c378458c77e7df783875ac722ddf283836b9" exitCode=0 Dec 08 17:45:01 crc kubenswrapper[5112]: I1208 17:45:01.811453 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-q2z4q" event={"ID":"a40006cc-472a-45ef-a674-9178066e15da","Type":"ContainerDied","Data":"3f7c265187bef579dce434330851c378458c77e7df783875ac722ddf283836b9"} Dec 08 17:45:01 crc 
kubenswrapper[5112]: I1208 17:45:01.811481 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-q2z4q" event={"ID":"a40006cc-472a-45ef-a674-9178066e15da","Type":"ContainerStarted","Data":"a9192b8ac8726185f316ee8d8651cc1630cd8b57750e25ad48ef8426747f9ef6"} Dec 08 17:45:03 crc kubenswrapper[5112]: I1208 17:45:03.017640 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-q2z4q" Dec 08 17:45:03 crc kubenswrapper[5112]: I1208 17:45:03.056341 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a40006cc-472a-45ef-a674-9178066e15da-secret-volume\") pod \"a40006cc-472a-45ef-a674-9178066e15da\" (UID: \"a40006cc-472a-45ef-a674-9178066e15da\") " Dec 08 17:45:03 crc kubenswrapper[5112]: I1208 17:45:03.056438 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54p9w\" (UniqueName: \"kubernetes.io/projected/a40006cc-472a-45ef-a674-9178066e15da-kube-api-access-54p9w\") pod \"a40006cc-472a-45ef-a674-9178066e15da\" (UID: \"a40006cc-472a-45ef-a674-9178066e15da\") " Dec 08 17:45:03 crc kubenswrapper[5112]: I1208 17:45:03.056474 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a40006cc-472a-45ef-a674-9178066e15da-config-volume\") pod \"a40006cc-472a-45ef-a674-9178066e15da\" (UID: \"a40006cc-472a-45ef-a674-9178066e15da\") " Dec 08 17:45:03 crc kubenswrapper[5112]: I1208 17:45:03.058066 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a40006cc-472a-45ef-a674-9178066e15da-config-volume" (OuterVolumeSpecName: "config-volume") pod "a40006cc-472a-45ef-a674-9178066e15da" (UID: "a40006cc-472a-45ef-a674-9178066e15da"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:45:03 crc kubenswrapper[5112]: I1208 17:45:03.066249 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a40006cc-472a-45ef-a674-9178066e15da-kube-api-access-54p9w" (OuterVolumeSpecName: "kube-api-access-54p9w") pod "a40006cc-472a-45ef-a674-9178066e15da" (UID: "a40006cc-472a-45ef-a674-9178066e15da"). InnerVolumeSpecName "kube-api-access-54p9w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:45:03 crc kubenswrapper[5112]: I1208 17:45:03.067455 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a40006cc-472a-45ef-a674-9178066e15da-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a40006cc-472a-45ef-a674-9178066e15da" (UID: "a40006cc-472a-45ef-a674-9178066e15da"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:45:03 crc kubenswrapper[5112]: I1208 17:45:03.157913 5112 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a40006cc-472a-45ef-a674-9178066e15da-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:03 crc kubenswrapper[5112]: I1208 17:45:03.157957 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-54p9w\" (UniqueName: \"kubernetes.io/projected/a40006cc-472a-45ef-a674-9178066e15da-kube-api-access-54p9w\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:03 crc kubenswrapper[5112]: I1208 17:45:03.157966 5112 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a40006cc-472a-45ef-a674-9178066e15da-config-volume\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:03 crc kubenswrapper[5112]: I1208 17:45:03.821364 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-q2z4q" 
event={"ID":"a40006cc-472a-45ef-a674-9178066e15da","Type":"ContainerDied","Data":"a9192b8ac8726185f316ee8d8651cc1630cd8b57750e25ad48ef8426747f9ef6"} Dec 08 17:45:03 crc kubenswrapper[5112]: I1208 17:45:03.821692 5112 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9192b8ac8726185f316ee8d8651cc1630cd8b57750e25ad48ef8426747f9ef6" Dec 08 17:45:03 crc kubenswrapper[5112]: I1208 17:45:03.821828 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-q2z4q" Dec 08 17:45:06 crc kubenswrapper[5112]: I1208 17:45:06.789807 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Dec 08 17:45:06 crc kubenswrapper[5112]: I1208 17:45:06.790321 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:45:06 crc kubenswrapper[5112]: I1208 17:45:06.791858 5112 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Dec 08 17:45:06 crc kubenswrapper[5112]: I1208 17:45:06.841994 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Dec 08 17:45:06 crc kubenswrapper[5112]: I1208 17:45:06.842056 5112 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="297a21012e286735157b1fa600a73370e07196b403fee7aa06754cb222d72dcd" 
exitCode=137 Dec 08 17:45:06 crc kubenswrapper[5112]: I1208 17:45:06.842198 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:45:06 crc kubenswrapper[5112]: I1208 17:45:06.842251 5112 scope.go:117] "RemoveContainer" containerID="297a21012e286735157b1fa600a73370e07196b403fee7aa06754cb222d72dcd" Dec 08 17:45:06 crc kubenswrapper[5112]: I1208 17:45:06.862944 5112 scope.go:117] "RemoveContainer" containerID="297a21012e286735157b1fa600a73370e07196b403fee7aa06754cb222d72dcd" Dec 08 17:45:06 crc kubenswrapper[5112]: E1208 17:45:06.863384 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"297a21012e286735157b1fa600a73370e07196b403fee7aa06754cb222d72dcd\": container with ID starting with 297a21012e286735157b1fa600a73370e07196b403fee7aa06754cb222d72dcd not found: ID does not exist" containerID="297a21012e286735157b1fa600a73370e07196b403fee7aa06754cb222d72dcd" Dec 08 17:45:06 crc kubenswrapper[5112]: I1208 17:45:06.863427 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"297a21012e286735157b1fa600a73370e07196b403fee7aa06754cb222d72dcd"} err="failed to get container status \"297a21012e286735157b1fa600a73370e07196b403fee7aa06754cb222d72dcd\": rpc error: code = NotFound desc = could not find container \"297a21012e286735157b1fa600a73370e07196b403fee7aa06754cb222d72dcd\": container with ID starting with 297a21012e286735157b1fa600a73370e07196b403fee7aa06754cb222d72dcd not found: ID does not exist" Dec 08 17:45:06 crc kubenswrapper[5112]: I1208 17:45:06.916297 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 08 17:45:06 crc 
kubenswrapper[5112]: I1208 17:45:06.916376 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:45:06 crc kubenswrapper[5112]: I1208 17:45:06.916403 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 08 17:45:06 crc kubenswrapper[5112]: I1208 17:45:06.916430 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 08 17:45:06 crc kubenswrapper[5112]: I1208 17:45:06.916489 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 08 17:45:06 crc kubenswrapper[5112]: I1208 17:45:06.916512 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:45:06 crc kubenswrapper[5112]: I1208 17:45:06.916543 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:45:06 crc kubenswrapper[5112]: I1208 17:45:06.916556 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 08 17:45:06 crc kubenswrapper[5112]: I1208 17:45:06.916812 5112 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:06 crc kubenswrapper[5112]: I1208 17:45:06.916827 5112 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:06 crc kubenswrapper[5112]: I1208 17:45:06.916840 5112 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:06 crc kubenswrapper[5112]: I1208 17:45:06.916896 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:45:07 crc kubenswrapper[5112]: I1208 17:45:07.375429 5112 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:07 crc kubenswrapper[5112]: I1208 17:45:07.393448 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:45:07 crc kubenswrapper[5112]: I1208 17:45:07.457557 5112 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Dec 08 17:45:07 crc kubenswrapper[5112]: I1208 17:45:07.477148 5112 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:09 crc kubenswrapper[5112]: I1208 17:45:09.340090 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes" Dec 08 17:45:09 crc kubenswrapper[5112]: I1208 17:45:09.610955 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-k5crt"] Dec 08 17:45:09 crc kubenswrapper[5112]: I1208 17:45:09.611282 5112 
kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k5crt" podUID="af065ece-a0e6-49a0-ba5e-21875f49cbd2" containerName="route-controller-manager" containerID="cri-o://75db4fd4ec545febaf46d652bb3fe582d6fe0aee68f5dbf0f58490bd5d97485d" gracePeriod=30 Dec 08 17:45:09 crc kubenswrapper[5112]: I1208 17:45:09.614619 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-p8dgq"] Dec 08 17:45:09 crc kubenswrapper[5112]: I1208 17:45:09.615132 5112 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-p8dgq" podUID="7e0c9c4f-1216-499b-a1dd-be2f225cb97f" containerName="controller-manager" containerID="cri-o://3f8b95e90c456d5575829342acae5ef665f0c95e88f2e8e46d21e35baa84de6a" gracePeriod=30 Dec 08 17:45:09 crc kubenswrapper[5112]: I1208 17:45:09.866205 5112 generic.go:358] "Generic (PLEG): container finished" podID="7e0c9c4f-1216-499b-a1dd-be2f225cb97f" containerID="3f8b95e90c456d5575829342acae5ef665f0c95e88f2e8e46d21e35baa84de6a" exitCode=0 Dec 08 17:45:09 crc kubenswrapper[5112]: I1208 17:45:09.866371 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-p8dgq" event={"ID":"7e0c9c4f-1216-499b-a1dd-be2f225cb97f","Type":"ContainerDied","Data":"3f8b95e90c456d5575829342acae5ef665f0c95e88f2e8e46d21e35baa84de6a"} Dec 08 17:45:09 crc kubenswrapper[5112]: I1208 17:45:09.868478 5112 generic.go:358] "Generic (PLEG): container finished" podID="af065ece-a0e6-49a0-ba5e-21875f49cbd2" containerID="75db4fd4ec545febaf46d652bb3fe582d6fe0aee68f5dbf0f58490bd5d97485d" exitCode=0 Dec 08 17:45:09 crc kubenswrapper[5112]: I1208 17:45:09.868675 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k5crt" 
event={"ID":"af065ece-a0e6-49a0-ba5e-21875f49cbd2","Type":"ContainerDied","Data":"75db4fd4ec545febaf46d652bb3fe582d6fe0aee68f5dbf0f58490bd5d97485d"} Dec 08 17:45:09 crc kubenswrapper[5112]: I1208 17:45:09.984954 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-p8dgq" Dec 08 17:45:09 crc kubenswrapper[5112]: I1208 17:45:09.992659 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k5crt" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.007810 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-dfd68485-s9jc5"] Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.007870 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e0c9c4f-1216-499b-a1dd-be2f225cb97f-proxy-ca-bundles\") pod \"7e0c9c4f-1216-499b-a1dd-be2f225cb97f\" (UID: \"7e0c9c4f-1216-499b-a1dd-be2f225cb97f\") " Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.007966 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af065ece-a0e6-49a0-ba5e-21875f49cbd2-config\") pod \"af065ece-a0e6-49a0-ba5e-21875f49cbd2\" (UID: \"af065ece-a0e6-49a0-ba5e-21875f49cbd2\") " Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.007996 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af065ece-a0e6-49a0-ba5e-21875f49cbd2-serving-cert\") pod \"af065ece-a0e6-49a0-ba5e-21875f49cbd2\" (UID: \"af065ece-a0e6-49a0-ba5e-21875f49cbd2\") " Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.008030 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2mq6\" 
(UniqueName: \"kubernetes.io/projected/7e0c9c4f-1216-499b-a1dd-be2f225cb97f-kube-api-access-l2mq6\") pod \"7e0c9c4f-1216-499b-a1dd-be2f225cb97f\" (UID: \"7e0c9c4f-1216-499b-a1dd-be2f225cb97f\") " Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.008053 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e0c9c4f-1216-499b-a1dd-be2f225cb97f-serving-cert\") pod \"7e0c9c4f-1216-499b-a1dd-be2f225cb97f\" (UID: \"7e0c9c4f-1216-499b-a1dd-be2f225cb97f\") " Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.008105 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7e0c9c4f-1216-499b-a1dd-be2f225cb97f-tmp\") pod \"7e0c9c4f-1216-499b-a1dd-be2f225cb97f\" (UID: \"7e0c9c4f-1216-499b-a1dd-be2f225cb97f\") " Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.008126 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwq8t\" (UniqueName: \"kubernetes.io/projected/af065ece-a0e6-49a0-ba5e-21875f49cbd2-kube-api-access-jwq8t\") pod \"af065ece-a0e6-49a0-ba5e-21875f49cbd2\" (UID: \"af065ece-a0e6-49a0-ba5e-21875f49cbd2\") " Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.008153 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af065ece-a0e6-49a0-ba5e-21875f49cbd2-client-ca\") pod \"af065ece-a0e6-49a0-ba5e-21875f49cbd2\" (UID: \"af065ece-a0e6-49a0-ba5e-21875f49cbd2\") " Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.008168 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e0c9c4f-1216-499b-a1dd-be2f225cb97f-client-ca\") pod \"7e0c9c4f-1216-499b-a1dd-be2f225cb97f\" (UID: \"7e0c9c4f-1216-499b-a1dd-be2f225cb97f\") " Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.008197 5112 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/af065ece-a0e6-49a0-ba5e-21875f49cbd2-tmp\") pod \"af065ece-a0e6-49a0-ba5e-21875f49cbd2\" (UID: \"af065ece-a0e6-49a0-ba5e-21875f49cbd2\") " Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.008213 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e0c9c4f-1216-499b-a1dd-be2f225cb97f-config\") pod \"7e0c9c4f-1216-499b-a1dd-be2f225cb97f\" (UID: \"7e0c9c4f-1216-499b-a1dd-be2f225cb97f\") " Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.008406 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="af065ece-a0e6-49a0-ba5e-21875f49cbd2" containerName="route-controller-manager" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.008421 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="af065ece-a0e6-49a0-ba5e-21875f49cbd2" containerName="route-controller-manager" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.008431 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.008436 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.008456 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7e0c9c4f-1216-499b-a1dd-be2f225cb97f" containerName="controller-manager" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.008461 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e0c9c4f-1216-499b-a1dd-be2f225cb97f" containerName="controller-manager" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.008473 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="a40006cc-472a-45ef-a674-9178066e15da" containerName="collect-profiles" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.008479 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="a40006cc-472a-45ef-a674-9178066e15da" containerName="collect-profiles" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.008567 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="7e0c9c4f-1216-499b-a1dd-be2f225cb97f" containerName="controller-manager" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.008578 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="af065ece-a0e6-49a0-ba5e-21875f49cbd2" containerName="route-controller-manager" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.008589 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="a40006cc-472a-45ef-a674-9178066e15da" containerName="collect-profiles" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.008600 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.008805 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e0c9c4f-1216-499b-a1dd-be2f225cb97f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7e0c9c4f-1216-499b-a1dd-be2f225cb97f" (UID: "7e0c9c4f-1216-499b-a1dd-be2f225cb97f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.008919 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e0c9c4f-1216-499b-a1dd-be2f225cb97f-config" (OuterVolumeSpecName: "config") pod "7e0c9c4f-1216-499b-a1dd-be2f225cb97f" (UID: "7e0c9c4f-1216-499b-a1dd-be2f225cb97f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.008972 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e0c9c4f-1216-499b-a1dd-be2f225cb97f-tmp" (OuterVolumeSpecName: "tmp") pod "7e0c9c4f-1216-499b-a1dd-be2f225cb97f" (UID: "7e0c9c4f-1216-499b-a1dd-be2f225cb97f"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.009379 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af065ece-a0e6-49a0-ba5e-21875f49cbd2-tmp" (OuterVolumeSpecName: "tmp") pod "af065ece-a0e6-49a0-ba5e-21875f49cbd2" (UID: "af065ece-a0e6-49a0-ba5e-21875f49cbd2"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.009431 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e0c9c4f-1216-499b-a1dd-be2f225cb97f-client-ca" (OuterVolumeSpecName: "client-ca") pod "7e0c9c4f-1216-499b-a1dd-be2f225cb97f" (UID: "7e0c9c4f-1216-499b-a1dd-be2f225cb97f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.009488 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af065ece-a0e6-49a0-ba5e-21875f49cbd2-client-ca" (OuterVolumeSpecName: "client-ca") pod "af065ece-a0e6-49a0-ba5e-21875f49cbd2" (UID: "af065ece-a0e6-49a0-ba5e-21875f49cbd2"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.009877 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af065ece-a0e6-49a0-ba5e-21875f49cbd2-config" (OuterVolumeSpecName: "config") pod "af065ece-a0e6-49a0-ba5e-21875f49cbd2" (UID: "af065ece-a0e6-49a0-ba5e-21875f49cbd2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.013749 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-dfd68485-s9jc5" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.021807 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af065ece-a0e6-49a0-ba5e-21875f49cbd2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "af065ece-a0e6-49a0-ba5e-21875f49cbd2" (UID: "af065ece-a0e6-49a0-ba5e-21875f49cbd2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.021915 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e0c9c4f-1216-499b-a1dd-be2f225cb97f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7e0c9c4f-1216-499b-a1dd-be2f225cb97f" (UID: "7e0c9c4f-1216-499b-a1dd-be2f225cb97f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.026211 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-dfd68485-s9jc5"] Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.030797 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e0c9c4f-1216-499b-a1dd-be2f225cb97f-kube-api-access-l2mq6" (OuterVolumeSpecName: "kube-api-access-l2mq6") pod "7e0c9c4f-1216-499b-a1dd-be2f225cb97f" (UID: "7e0c9c4f-1216-499b-a1dd-be2f225cb97f"). InnerVolumeSpecName "kube-api-access-l2mq6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.034623 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af065ece-a0e6-49a0-ba5e-21875f49cbd2-kube-api-access-jwq8t" (OuterVolumeSpecName: "kube-api-access-jwq8t") pod "af065ece-a0e6-49a0-ba5e-21875f49cbd2" (UID: "af065ece-a0e6-49a0-ba5e-21875f49cbd2"). InnerVolumeSpecName "kube-api-access-jwq8t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.047391 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c6c48458c-8mgvf"] Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.059332 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c6c48458c-8mgvf"] Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.059457 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-8mgvf" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.109388 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/451ffa8a-a736-4537-9311-f86b0306c5a9-config\") pod \"route-controller-manager-5c6c48458c-8mgvf\" (UID: \"451ffa8a-a736-4537-9311-f86b0306c5a9\") " pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-8mgvf" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.109424 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a70e904b-aaf4-4b41-998c-178dca51e32a-client-ca\") pod \"controller-manager-dfd68485-s9jc5\" (UID: \"a70e904b-aaf4-4b41-998c-178dca51e32a\") " pod="openshift-controller-manager/controller-manager-dfd68485-s9jc5" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.109443 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkn7t\" (UniqueName: \"kubernetes.io/projected/451ffa8a-a736-4537-9311-f86b0306c5a9-kube-api-access-wkn7t\") pod \"route-controller-manager-5c6c48458c-8mgvf\" (UID: \"451ffa8a-a736-4537-9311-f86b0306c5a9\") " pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-8mgvf" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.109461 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/451ffa8a-a736-4537-9311-f86b0306c5a9-client-ca\") pod \"route-controller-manager-5c6c48458c-8mgvf\" (UID: \"451ffa8a-a736-4537-9311-f86b0306c5a9\") " pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-8mgvf" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.109476 5112 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a70e904b-aaf4-4b41-998c-178dca51e32a-serving-cert\") pod \"controller-manager-dfd68485-s9jc5\" (UID: \"a70e904b-aaf4-4b41-998c-178dca51e32a\") " pod="openshift-controller-manager/controller-manager-dfd68485-s9jc5" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.109525 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/451ffa8a-a736-4537-9311-f86b0306c5a9-tmp\") pod \"route-controller-manager-5c6c48458c-8mgvf\" (UID: \"451ffa8a-a736-4537-9311-f86b0306c5a9\") " pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-8mgvf" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.109539 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a70e904b-aaf4-4b41-998c-178dca51e32a-tmp\") pod \"controller-manager-dfd68485-s9jc5\" (UID: \"a70e904b-aaf4-4b41-998c-178dca51e32a\") " pod="openshift-controller-manager/controller-manager-dfd68485-s9jc5" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.109605 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/451ffa8a-a736-4537-9311-f86b0306c5a9-serving-cert\") pod \"route-controller-manager-5c6c48458c-8mgvf\" (UID: \"451ffa8a-a736-4537-9311-f86b0306c5a9\") " pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-8mgvf" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.109645 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a70e904b-aaf4-4b41-998c-178dca51e32a-proxy-ca-bundles\") pod \"controller-manager-dfd68485-s9jc5\" (UID: 
\"a70e904b-aaf4-4b41-998c-178dca51e32a\") " pod="openshift-controller-manager/controller-manager-dfd68485-s9jc5" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.109675 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db7gn\" (UniqueName: \"kubernetes.io/projected/a70e904b-aaf4-4b41-998c-178dca51e32a-kube-api-access-db7gn\") pod \"controller-manager-dfd68485-s9jc5\" (UID: \"a70e904b-aaf4-4b41-998c-178dca51e32a\") " pod="openshift-controller-manager/controller-manager-dfd68485-s9jc5" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.109699 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a70e904b-aaf4-4b41-998c-178dca51e32a-config\") pod \"controller-manager-dfd68485-s9jc5\" (UID: \"a70e904b-aaf4-4b41-998c-178dca51e32a\") " pod="openshift-controller-manager/controller-manager-dfd68485-s9jc5" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.109765 5112 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af065ece-a0e6-49a0-ba5e-21875f49cbd2-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.109778 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l2mq6\" (UniqueName: \"kubernetes.io/projected/7e0c9c4f-1216-499b-a1dd-be2f225cb97f-kube-api-access-l2mq6\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.109787 5112 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e0c9c4f-1216-499b-a1dd-be2f225cb97f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.109795 5112 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/7e0c9c4f-1216-499b-a1dd-be2f225cb97f-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.109803 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jwq8t\" (UniqueName: \"kubernetes.io/projected/af065ece-a0e6-49a0-ba5e-21875f49cbd2-kube-api-access-jwq8t\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.109812 5112 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/af065ece-a0e6-49a0-ba5e-21875f49cbd2-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.109820 5112 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e0c9c4f-1216-499b-a1dd-be2f225cb97f-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.109828 5112 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/af065ece-a0e6-49a0-ba5e-21875f49cbd2-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.109835 5112 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e0c9c4f-1216-499b-a1dd-be2f225cb97f-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.109843 5112 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e0c9c4f-1216-499b-a1dd-be2f225cb97f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.109851 5112 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af065ece-a0e6-49a0-ba5e-21875f49cbd2-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.211540 5112 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a70e904b-aaf4-4b41-998c-178dca51e32a-client-ca\") pod \"controller-manager-dfd68485-s9jc5\" (UID: \"a70e904b-aaf4-4b41-998c-178dca51e32a\") " pod="openshift-controller-manager/controller-manager-dfd68485-s9jc5" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.211825 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wkn7t\" (UniqueName: \"kubernetes.io/projected/451ffa8a-a736-4537-9311-f86b0306c5a9-kube-api-access-wkn7t\") pod \"route-controller-manager-5c6c48458c-8mgvf\" (UID: \"451ffa8a-a736-4537-9311-f86b0306c5a9\") " pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-8mgvf" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.211898 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/451ffa8a-a736-4537-9311-f86b0306c5a9-client-ca\") pod \"route-controller-manager-5c6c48458c-8mgvf\" (UID: \"451ffa8a-a736-4537-9311-f86b0306c5a9\") " pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-8mgvf" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.211940 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a70e904b-aaf4-4b41-998c-178dca51e32a-serving-cert\") pod \"controller-manager-dfd68485-s9jc5\" (UID: \"a70e904b-aaf4-4b41-998c-178dca51e32a\") " pod="openshift-controller-manager/controller-manager-dfd68485-s9jc5" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.212851 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/451ffa8a-a736-4537-9311-f86b0306c5a9-client-ca\") pod \"route-controller-manager-5c6c48458c-8mgvf\" (UID: \"451ffa8a-a736-4537-9311-f86b0306c5a9\") " 
pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-8mgvf" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.212876 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a70e904b-aaf4-4b41-998c-178dca51e32a-client-ca\") pod \"controller-manager-dfd68485-s9jc5\" (UID: \"a70e904b-aaf4-4b41-998c-178dca51e32a\") " pod="openshift-controller-manager/controller-manager-dfd68485-s9jc5" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.212888 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/451ffa8a-a736-4537-9311-f86b0306c5a9-tmp\") pod \"route-controller-manager-5c6c48458c-8mgvf\" (UID: \"451ffa8a-a736-4537-9311-f86b0306c5a9\") " pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-8mgvf" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.212978 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a70e904b-aaf4-4b41-998c-178dca51e32a-tmp\") pod \"controller-manager-dfd68485-s9jc5\" (UID: \"a70e904b-aaf4-4b41-998c-178dca51e32a\") " pod="openshift-controller-manager/controller-manager-dfd68485-s9jc5" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.213070 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/451ffa8a-a736-4537-9311-f86b0306c5a9-serving-cert\") pod \"route-controller-manager-5c6c48458c-8mgvf\" (UID: \"451ffa8a-a736-4537-9311-f86b0306c5a9\") " pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-8mgvf" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.213226 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a70e904b-aaf4-4b41-998c-178dca51e32a-proxy-ca-bundles\") pod 
\"controller-manager-dfd68485-s9jc5\" (UID: \"a70e904b-aaf4-4b41-998c-178dca51e32a\") " pod="openshift-controller-manager/controller-manager-dfd68485-s9jc5" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.213280 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-db7gn\" (UniqueName: \"kubernetes.io/projected/a70e904b-aaf4-4b41-998c-178dca51e32a-kube-api-access-db7gn\") pod \"controller-manager-dfd68485-s9jc5\" (UID: \"a70e904b-aaf4-4b41-998c-178dca51e32a\") " pod="openshift-controller-manager/controller-manager-dfd68485-s9jc5" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.213324 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/451ffa8a-a736-4537-9311-f86b0306c5a9-tmp\") pod \"route-controller-manager-5c6c48458c-8mgvf\" (UID: \"451ffa8a-a736-4537-9311-f86b0306c5a9\") " pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-8mgvf" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.213372 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a70e904b-aaf4-4b41-998c-178dca51e32a-tmp\") pod \"controller-manager-dfd68485-s9jc5\" (UID: \"a70e904b-aaf4-4b41-998c-178dca51e32a\") " pod="openshift-controller-manager/controller-manager-dfd68485-s9jc5" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.213510 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a70e904b-aaf4-4b41-998c-178dca51e32a-config\") pod \"controller-manager-dfd68485-s9jc5\" (UID: \"a70e904b-aaf4-4b41-998c-178dca51e32a\") " pod="openshift-controller-manager/controller-manager-dfd68485-s9jc5" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.213622 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/451ffa8a-a736-4537-9311-f86b0306c5a9-config\") pod \"route-controller-manager-5c6c48458c-8mgvf\" (UID: \"451ffa8a-a736-4537-9311-f86b0306c5a9\") " pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-8mgvf" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.214453 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a70e904b-aaf4-4b41-998c-178dca51e32a-proxy-ca-bundles\") pod \"controller-manager-dfd68485-s9jc5\" (UID: \"a70e904b-aaf4-4b41-998c-178dca51e32a\") " pod="openshift-controller-manager/controller-manager-dfd68485-s9jc5" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.214518 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/451ffa8a-a736-4537-9311-f86b0306c5a9-config\") pod \"route-controller-manager-5c6c48458c-8mgvf\" (UID: \"451ffa8a-a736-4537-9311-f86b0306c5a9\") " pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-8mgvf" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.215025 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a70e904b-aaf4-4b41-998c-178dca51e32a-config\") pod \"controller-manager-dfd68485-s9jc5\" (UID: \"a70e904b-aaf4-4b41-998c-178dca51e32a\") " pod="openshift-controller-manager/controller-manager-dfd68485-s9jc5" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.216677 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/451ffa8a-a736-4537-9311-f86b0306c5a9-serving-cert\") pod \"route-controller-manager-5c6c48458c-8mgvf\" (UID: \"451ffa8a-a736-4537-9311-f86b0306c5a9\") " pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-8mgvf" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.216840 5112 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a70e904b-aaf4-4b41-998c-178dca51e32a-serving-cert\") pod \"controller-manager-dfd68485-s9jc5\" (UID: \"a70e904b-aaf4-4b41-998c-178dca51e32a\") " pod="openshift-controller-manager/controller-manager-dfd68485-s9jc5" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.232121 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-db7gn\" (UniqueName: \"kubernetes.io/projected/a70e904b-aaf4-4b41-998c-178dca51e32a-kube-api-access-db7gn\") pod \"controller-manager-dfd68485-s9jc5\" (UID: \"a70e904b-aaf4-4b41-998c-178dca51e32a\") " pod="openshift-controller-manager/controller-manager-dfd68485-s9jc5" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.232273 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkn7t\" (UniqueName: \"kubernetes.io/projected/451ffa8a-a736-4537-9311-f86b0306c5a9-kube-api-access-wkn7t\") pod \"route-controller-manager-5c6c48458c-8mgvf\" (UID: \"451ffa8a-a736-4537-9311-f86b0306c5a9\") " pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-8mgvf" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.355415 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-dfd68485-s9jc5" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.376706 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-8mgvf" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.548624 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-dfd68485-s9jc5"] Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.587897 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c6c48458c-8mgvf"] Dec 08 17:45:10 crc kubenswrapper[5112]: W1208 17:45:10.596990 5112 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod451ffa8a_a736_4537_9311_f86b0306c5a9.slice/crio-f39b13f6b4affbb5571cb62582f4767f79669e86cd1ad2842c408a37aca4d420 WatchSource:0}: Error finding container f39b13f6b4affbb5571cb62582f4767f79669e86cd1ad2842c408a37aca4d420: Status 404 returned error can't find the container with id f39b13f6b4affbb5571cb62582f4767f79669e86cd1ad2842c408a37aca4d420 Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.876373 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-8mgvf" event={"ID":"451ffa8a-a736-4537-9311-f86b0306c5a9","Type":"ContainerStarted","Data":"7e3353120644186f1b650273588055252c74f9361625d77e3f33a68bdbc39076"} Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.876428 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-8mgvf" event={"ID":"451ffa8a-a736-4537-9311-f86b0306c5a9","Type":"ContainerStarted","Data":"f39b13f6b4affbb5571cb62582f4767f79669e86cd1ad2842c408a37aca4d420"} Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.876643 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-8mgvf" Dec 08 17:45:10 crc 
kubenswrapper[5112]: I1208 17:45:10.877888 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k5crt" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.877912 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-k5crt" event={"ID":"af065ece-a0e6-49a0-ba5e-21875f49cbd2","Type":"ContainerDied","Data":"3bdf98e97399ca990caf022ca5b6064eafe9508bbf731ffed0191fffd9f51a21"} Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.877978 5112 scope.go:117] "RemoveContainer" containerID="75db4fd4ec545febaf46d652bb3fe582d6fe0aee68f5dbf0f58490bd5d97485d" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.879880 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-dfd68485-s9jc5" event={"ID":"a70e904b-aaf4-4b41-998c-178dca51e32a","Type":"ContainerStarted","Data":"5ed740f9595d5f55c1ece1ebf2f1977a287c5e574bc7edc0ba5d2791b99601c2"} Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.879907 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-dfd68485-s9jc5" event={"ID":"a70e904b-aaf4-4b41-998c-178dca51e32a","Type":"ContainerStarted","Data":"c0ac7faed3d8ff0f5009172b0fa4e9e06fa807679d82f556951b1df4b4a8b6cb"} Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.880093 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-dfd68485-s9jc5" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.883619 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-p8dgq" event={"ID":"7e0c9c4f-1216-499b-a1dd-be2f225cb97f","Type":"ContainerDied","Data":"fe3b848f7fe53c06f5adbf2122fa10ce4d42c7769bd30cac5abc5d4c1d8e5b5d"} Dec 08 17:45:10 crc 
kubenswrapper[5112]: I1208 17:45:10.883731 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-p8dgq" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.898795 5112 scope.go:117] "RemoveContainer" containerID="3f8b95e90c456d5575829342acae5ef665f0c95e88f2e8e46d21e35baa84de6a" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.903409 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-8mgvf" podStartSLOduration=0.903383582 podStartE2EDuration="903.383582ms" podCreationTimestamp="2025-12-08 17:45:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:45:10.902365145 +0000 UTC m=+287.911913846" watchObservedRunningTime="2025-12-08 17:45:10.903383582 +0000 UTC m=+287.912932303" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.925047 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-dfd68485-s9jc5" podStartSLOduration=1.925030176 podStartE2EDuration="1.925030176s" podCreationTimestamp="2025-12-08 17:45:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:45:10.918554031 +0000 UTC m=+287.928102752" watchObservedRunningTime="2025-12-08 17:45:10.925030176 +0000 UTC m=+287.934578897" Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.945722 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-k5crt"] Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.949702 5112 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-k5crt"] Dec 08 17:45:10 crc 
kubenswrapper[5112]: I1208 17:45:10.954312 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-p8dgq"]
Dec 08 17:45:10 crc kubenswrapper[5112]: I1208 17:45:10.958619 5112 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-p8dgq"]
Dec 08 17:45:11 crc kubenswrapper[5112]: I1208 17:45:11.322486 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e0c9c4f-1216-499b-a1dd-be2f225cb97f" path="/var/lib/kubelet/pods/7e0c9c4f-1216-499b-a1dd-be2f225cb97f/volumes"
Dec 08 17:45:11 crc kubenswrapper[5112]: I1208 17:45:11.323396 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af065ece-a0e6-49a0-ba5e-21875f49cbd2" path="/var/lib/kubelet/pods/af065ece-a0e6-49a0-ba5e-21875f49cbd2/volumes"
Dec 08 17:45:11 crc kubenswrapper[5112]: I1208 17:45:11.399039 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-dfd68485-s9jc5"
Dec 08 17:45:11 crc kubenswrapper[5112]: I1208 17:45:11.706345 5112 patch_prober.go:28] interesting pod/machine-config-daemon-s6wzf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 17:45:11 crc kubenswrapper[5112]: I1208 17:45:11.706418 5112 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" podUID="95e46da0-94bb-4d22-804b-b3018984cdac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 17:45:11 crc kubenswrapper[5112]: I1208 17:45:11.706461 5112 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf"
Dec 08 17:45:11 crc kubenswrapper[5112]: I1208 17:45:11.707119 5112 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"06e99bae4932494f4de98999926cd28dc808f1a2982c7e8e2372927bc72d1153"} pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 08 17:45:11 crc kubenswrapper[5112]: I1208 17:45:11.707179 5112 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" podUID="95e46da0-94bb-4d22-804b-b3018984cdac" containerName="machine-config-daemon" containerID="cri-o://06e99bae4932494f4de98999926cd28dc808f1a2982c7e8e2372927bc72d1153" gracePeriod=600
Dec 08 17:45:11 crc kubenswrapper[5112]: I1208 17:45:11.850030 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-8mgvf"
Dec 08 17:45:11 crc kubenswrapper[5112]: I1208 17:45:11.900057 5112 generic.go:358] "Generic (PLEG): container finished" podID="95e46da0-94bb-4d22-804b-b3018984cdac" containerID="06e99bae4932494f4de98999926cd28dc808f1a2982c7e8e2372927bc72d1153" exitCode=0
Dec 08 17:45:11 crc kubenswrapper[5112]: I1208 17:45:11.900130 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" event={"ID":"95e46da0-94bb-4d22-804b-b3018984cdac","Type":"ContainerDied","Data":"06e99bae4932494f4de98999926cd28dc808f1a2982c7e8e2372927bc72d1153"}
Dec 08 17:45:12 crc kubenswrapper[5112]: I1208 17:45:12.910222 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" event={"ID":"95e46da0-94bb-4d22-804b-b3018984cdac","Type":"ContainerStarted","Data":"2e997f82b6ef61dbbf8fb6c80ff4306b0d7fbb9d6ce22b1cf0188311756dd12e"}
Dec 08 17:45:23 crc kubenswrapper[5112]: I1208 17:45:23.463549 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Dec 08 17:45:23 crc kubenswrapper[5112]: I1208 17:45:23.464517 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Dec 08 17:45:24 crc kubenswrapper[5112]: I1208 17:45:24.574961 5112 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 08 17:45:29 crc kubenswrapper[5112]: I1208 17:45:29.586723 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-dfd68485-s9jc5"]
Dec 08 17:45:29 crc kubenswrapper[5112]: I1208 17:45:29.587725 5112 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-dfd68485-s9jc5" podUID="a70e904b-aaf4-4b41-998c-178dca51e32a" containerName="controller-manager" containerID="cri-o://5ed740f9595d5f55c1ece1ebf2f1977a287c5e574bc7edc0ba5d2791b99601c2" gracePeriod=30
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.006347 5112 generic.go:358] "Generic (PLEG): container finished" podID="a70e904b-aaf4-4b41-998c-178dca51e32a" containerID="5ed740f9595d5f55c1ece1ebf2f1977a287c5e574bc7edc0ba5d2791b99601c2" exitCode=0
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.006431 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-dfd68485-s9jc5" event={"ID":"a70e904b-aaf4-4b41-998c-178dca51e32a","Type":"ContainerDied","Data":"5ed740f9595d5f55c1ece1ebf2f1977a287c5e574bc7edc0ba5d2791b99601c2"}
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.316396 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-dfd68485-s9jc5"
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.336015 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-79565f44cc-2pnj9"]
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.336607 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a70e904b-aaf4-4b41-998c-178dca51e32a" containerName="controller-manager"
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.336626 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="a70e904b-aaf4-4b41-998c-178dca51e32a" containerName="controller-manager"
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.336710 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="a70e904b-aaf4-4b41-998c-178dca51e32a" containerName="controller-manager"
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.344101 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-79565f44cc-2pnj9"
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.350026 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-79565f44cc-2pnj9"]
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.487616 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a70e904b-aaf4-4b41-998c-178dca51e32a-client-ca\") pod \"a70e904b-aaf4-4b41-998c-178dca51e32a\" (UID: \"a70e904b-aaf4-4b41-998c-178dca51e32a\") "
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.487952 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a70e904b-aaf4-4b41-998c-178dca51e32a-serving-cert\") pod \"a70e904b-aaf4-4b41-998c-178dca51e32a\" (UID: \"a70e904b-aaf4-4b41-998c-178dca51e32a\") "
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.488041 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a70e904b-aaf4-4b41-998c-178dca51e32a-proxy-ca-bundles\") pod \"a70e904b-aaf4-4b41-998c-178dca51e32a\" (UID: \"a70e904b-aaf4-4b41-998c-178dca51e32a\") "
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.488218 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-db7gn\" (UniqueName: \"kubernetes.io/projected/a70e904b-aaf4-4b41-998c-178dca51e32a-kube-api-access-db7gn\") pod \"a70e904b-aaf4-4b41-998c-178dca51e32a\" (UID: \"a70e904b-aaf4-4b41-998c-178dca51e32a\") "
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.488315 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a70e904b-aaf4-4b41-998c-178dca51e32a-config\") pod \"a70e904b-aaf4-4b41-998c-178dca51e32a\" (UID: \"a70e904b-aaf4-4b41-998c-178dca51e32a\") "
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.488403 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a70e904b-aaf4-4b41-998c-178dca51e32a-tmp\") pod \"a70e904b-aaf4-4b41-998c-178dca51e32a\" (UID: \"a70e904b-aaf4-4b41-998c-178dca51e32a\") "
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.488845 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a70e904b-aaf4-4b41-998c-178dca51e32a-tmp" (OuterVolumeSpecName: "tmp") pod "a70e904b-aaf4-4b41-998c-178dca51e32a" (UID: "a70e904b-aaf4-4b41-998c-178dca51e32a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.488981 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a70e904b-aaf4-4b41-998c-178dca51e32a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a70e904b-aaf4-4b41-998c-178dca51e32a" (UID: "a70e904b-aaf4-4b41-998c-178dca51e32a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.489139 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a70e904b-aaf4-4b41-998c-178dca51e32a-client-ca" (OuterVolumeSpecName: "client-ca") pod "a70e904b-aaf4-4b41-998c-178dca51e32a" (UID: "a70e904b-aaf4-4b41-998c-178dca51e32a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.489282 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zscsr\" (UniqueName: \"kubernetes.io/projected/43ea16a3-01fc-409d-b1a4-00fa11c1ff21-kube-api-access-zscsr\") pod \"controller-manager-79565f44cc-2pnj9\" (UID: \"43ea16a3-01fc-409d-b1a4-00fa11c1ff21\") " pod="openshift-controller-manager/controller-manager-79565f44cc-2pnj9"
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.489982 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a70e904b-aaf4-4b41-998c-178dca51e32a-config" (OuterVolumeSpecName: "config") pod "a70e904b-aaf4-4b41-998c-178dca51e32a" (UID: "a70e904b-aaf4-4b41-998c-178dca51e32a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.490419 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/43ea16a3-01fc-409d-b1a4-00fa11c1ff21-client-ca\") pod \"controller-manager-79565f44cc-2pnj9\" (UID: \"43ea16a3-01fc-409d-b1a4-00fa11c1ff21\") " pod="openshift-controller-manager/controller-manager-79565f44cc-2pnj9"
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.490553 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/43ea16a3-01fc-409d-b1a4-00fa11c1ff21-proxy-ca-bundles\") pod \"controller-manager-79565f44cc-2pnj9\" (UID: \"43ea16a3-01fc-409d-b1a4-00fa11c1ff21\") " pod="openshift-controller-manager/controller-manager-79565f44cc-2pnj9"
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.490639 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/43ea16a3-01fc-409d-b1a4-00fa11c1ff21-tmp\") pod \"controller-manager-79565f44cc-2pnj9\" (UID: \"43ea16a3-01fc-409d-b1a4-00fa11c1ff21\") " pod="openshift-controller-manager/controller-manager-79565f44cc-2pnj9"
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.490768 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ea16a3-01fc-409d-b1a4-00fa11c1ff21-serving-cert\") pod \"controller-manager-79565f44cc-2pnj9\" (UID: \"43ea16a3-01fc-409d-b1a4-00fa11c1ff21\") " pod="openshift-controller-manager/controller-manager-79565f44cc-2pnj9"
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.490818 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ea16a3-01fc-409d-b1a4-00fa11c1ff21-config\") pod \"controller-manager-79565f44cc-2pnj9\" (UID: \"43ea16a3-01fc-409d-b1a4-00fa11c1ff21\") " pod="openshift-controller-manager/controller-manager-79565f44cc-2pnj9"
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.490920 5112 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a70e904b-aaf4-4b41-998c-178dca51e32a-client-ca\") on node \"crc\" DevicePath \"\""
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.490943 5112 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a70e904b-aaf4-4b41-998c-178dca51e32a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.490961 5112 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a70e904b-aaf4-4b41-998c-178dca51e32a-config\") on node \"crc\" DevicePath \"\""
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.490978 5112 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a70e904b-aaf4-4b41-998c-178dca51e32a-tmp\") on node \"crc\" DevicePath \"\""
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.500402 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a70e904b-aaf4-4b41-998c-178dca51e32a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a70e904b-aaf4-4b41-998c-178dca51e32a" (UID: "a70e904b-aaf4-4b41-998c-178dca51e32a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.500466 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a70e904b-aaf4-4b41-998c-178dca51e32a-kube-api-access-db7gn" (OuterVolumeSpecName: "kube-api-access-db7gn") pod "a70e904b-aaf4-4b41-998c-178dca51e32a" (UID: "a70e904b-aaf4-4b41-998c-178dca51e32a"). InnerVolumeSpecName "kube-api-access-db7gn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.593026 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/43ea16a3-01fc-409d-b1a4-00fa11c1ff21-client-ca\") pod \"controller-manager-79565f44cc-2pnj9\" (UID: \"43ea16a3-01fc-409d-b1a4-00fa11c1ff21\") " pod="openshift-controller-manager/controller-manager-79565f44cc-2pnj9"
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.593097 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/43ea16a3-01fc-409d-b1a4-00fa11c1ff21-proxy-ca-bundles\") pod \"controller-manager-79565f44cc-2pnj9\" (UID: \"43ea16a3-01fc-409d-b1a4-00fa11c1ff21\") " pod="openshift-controller-manager/controller-manager-79565f44cc-2pnj9"
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.593127 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/43ea16a3-01fc-409d-b1a4-00fa11c1ff21-tmp\") pod \"controller-manager-79565f44cc-2pnj9\" (UID: \"43ea16a3-01fc-409d-b1a4-00fa11c1ff21\") " pod="openshift-controller-manager/controller-manager-79565f44cc-2pnj9"
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.593158 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ea16a3-01fc-409d-b1a4-00fa11c1ff21-serving-cert\") pod \"controller-manager-79565f44cc-2pnj9\" (UID: \"43ea16a3-01fc-409d-b1a4-00fa11c1ff21\") " pod="openshift-controller-manager/controller-manager-79565f44cc-2pnj9"
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.593177 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ea16a3-01fc-409d-b1a4-00fa11c1ff21-config\") pod \"controller-manager-79565f44cc-2pnj9\" (UID: \"43ea16a3-01fc-409d-b1a4-00fa11c1ff21\") " pod="openshift-controller-manager/controller-manager-79565f44cc-2pnj9"
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.593233 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zscsr\" (UniqueName: \"kubernetes.io/projected/43ea16a3-01fc-409d-b1a4-00fa11c1ff21-kube-api-access-zscsr\") pod \"controller-manager-79565f44cc-2pnj9\" (UID: \"43ea16a3-01fc-409d-b1a4-00fa11c1ff21\") " pod="openshift-controller-manager/controller-manager-79565f44cc-2pnj9"
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.593490 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-db7gn\" (UniqueName: \"kubernetes.io/projected/a70e904b-aaf4-4b41-998c-178dca51e32a-kube-api-access-db7gn\") on node \"crc\" DevicePath \"\""
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.593506 5112 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a70e904b-aaf4-4b41-998c-178dca51e32a-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.593737 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/43ea16a3-01fc-409d-b1a4-00fa11c1ff21-tmp\") pod \"controller-manager-79565f44cc-2pnj9\" (UID: \"43ea16a3-01fc-409d-b1a4-00fa11c1ff21\") " pod="openshift-controller-manager/controller-manager-79565f44cc-2pnj9"
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.594265 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/43ea16a3-01fc-409d-b1a4-00fa11c1ff21-client-ca\") pod \"controller-manager-79565f44cc-2pnj9\" (UID: \"43ea16a3-01fc-409d-b1a4-00fa11c1ff21\") " pod="openshift-controller-manager/controller-manager-79565f44cc-2pnj9"
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.594833 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43ea16a3-01fc-409d-b1a4-00fa11c1ff21-config\") pod \"controller-manager-79565f44cc-2pnj9\" (UID: \"43ea16a3-01fc-409d-b1a4-00fa11c1ff21\") " pod="openshift-controller-manager/controller-manager-79565f44cc-2pnj9"
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.595121 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/43ea16a3-01fc-409d-b1a4-00fa11c1ff21-proxy-ca-bundles\") pod \"controller-manager-79565f44cc-2pnj9\" (UID: \"43ea16a3-01fc-409d-b1a4-00fa11c1ff21\") " pod="openshift-controller-manager/controller-manager-79565f44cc-2pnj9"
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.597142 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43ea16a3-01fc-409d-b1a4-00fa11c1ff21-serving-cert\") pod \"controller-manager-79565f44cc-2pnj9\" (UID: \"43ea16a3-01fc-409d-b1a4-00fa11c1ff21\") " pod="openshift-controller-manager/controller-manager-79565f44cc-2pnj9"
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.610465 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zscsr\" (UniqueName: \"kubernetes.io/projected/43ea16a3-01fc-409d-b1a4-00fa11c1ff21-kube-api-access-zscsr\") pod \"controller-manager-79565f44cc-2pnj9\" (UID: \"43ea16a3-01fc-409d-b1a4-00fa11c1ff21\") " pod="openshift-controller-manager/controller-manager-79565f44cc-2pnj9"
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.663073 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-79565f44cc-2pnj9"
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.822521 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-79565f44cc-2pnj9"]
Dec 08 17:45:30 crc kubenswrapper[5112]: I1208 17:45:30.832964 5112 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 08 17:45:31 crc kubenswrapper[5112]: I1208 17:45:31.014415 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79565f44cc-2pnj9" event={"ID":"43ea16a3-01fc-409d-b1a4-00fa11c1ff21","Type":"ContainerStarted","Data":"6a4248e1dd9d1abd120e583824aa67a2295b896402e1a6dea9d667f8fb4cb52d"}
Dec 08 17:45:31 crc kubenswrapper[5112]: I1208 17:45:31.014723 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79565f44cc-2pnj9" event={"ID":"43ea16a3-01fc-409d-b1a4-00fa11c1ff21","Type":"ContainerStarted","Data":"acc542e98c534e3c1892bcc2ff7afb4114f153f373ddd53da5ed4027030437cd"}
Dec 08 17:45:31 crc kubenswrapper[5112]: I1208 17:45:31.014740 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-79565f44cc-2pnj9"
Dec 08 17:45:31 crc kubenswrapper[5112]: I1208 17:45:31.016714 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-dfd68485-s9jc5" event={"ID":"a70e904b-aaf4-4b41-998c-178dca51e32a","Type":"ContainerDied","Data":"c0ac7faed3d8ff0f5009172b0fa4e9e06fa807679d82f556951b1df4b4a8b6cb"}
Dec 08 17:45:31 crc kubenswrapper[5112]: I1208 17:45:31.016744 5112 scope.go:117] "RemoveContainer" containerID="5ed740f9595d5f55c1ece1ebf2f1977a287c5e574bc7edc0ba5d2791b99601c2"
Dec 08 17:45:31 crc kubenswrapper[5112]: I1208 17:45:31.016791 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-dfd68485-s9jc5"
Dec 08 17:45:31 crc kubenswrapper[5112]: I1208 17:45:31.032646 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-79565f44cc-2pnj9" podStartSLOduration=2.032632041 podStartE2EDuration="2.032632041s" podCreationTimestamp="2025-12-08 17:45:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:45:31.031101989 +0000 UTC m=+308.040650720" watchObservedRunningTime="2025-12-08 17:45:31.032632041 +0000 UTC m=+308.042180742"
Dec 08 17:45:31 crc kubenswrapper[5112]: I1208 17:45:31.052899 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-dfd68485-s9jc5"]
Dec 08 17:45:31 crc kubenswrapper[5112]: I1208 17:45:31.056014 5112 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-dfd68485-s9jc5"]
Dec 08 17:45:31 crc kubenswrapper[5112]: I1208 17:45:31.247793 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-79565f44cc-2pnj9"
Dec 08 17:45:31 crc kubenswrapper[5112]: I1208 17:45:31.322744 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a70e904b-aaf4-4b41-998c-178dca51e32a" path="/var/lib/kubelet/pods/a70e904b-aaf4-4b41-998c-178dca51e32a/volumes"
Dec 08 17:45:57 crc kubenswrapper[5112]: I1208 17:45:57.973373 5112 ???:1] "http: TLS handshake error from 192.168.126.11:55408: no serving certificate available for the kubelet"
Dec 08 17:46:09 crc kubenswrapper[5112]: I1208 17:46:09.585472 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c6c48458c-8mgvf"]
Dec 08 17:46:09 crc kubenswrapper[5112]: I1208 17:46:09.586482 5112 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-8mgvf" podUID="451ffa8a-a736-4537-9311-f86b0306c5a9" containerName="route-controller-manager" containerID="cri-o://7e3353120644186f1b650273588055252c74f9361625d77e3f33a68bdbc39076" gracePeriod=30
Dec 08 17:46:09 crc kubenswrapper[5112]: I1208 17:46:09.993923 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-8mgvf"
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.034844 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-646c6f99f4-9qtmp"]
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.035662 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="451ffa8a-a736-4537-9311-f86b0306c5a9" containerName="route-controller-manager"
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.035693 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="451ffa8a-a736-4537-9311-f86b0306c5a9" containerName="route-controller-manager"
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.035785 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="451ffa8a-a736-4537-9311-f86b0306c5a9" containerName="route-controller-manager"
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.041627 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-646c6f99f4-9qtmp"
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.051171 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-646c6f99f4-9qtmp"]
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.147125 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkn7t\" (UniqueName: \"kubernetes.io/projected/451ffa8a-a736-4537-9311-f86b0306c5a9-kube-api-access-wkn7t\") pod \"451ffa8a-a736-4537-9311-f86b0306c5a9\" (UID: \"451ffa8a-a736-4537-9311-f86b0306c5a9\") "
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.147386 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/451ffa8a-a736-4537-9311-f86b0306c5a9-client-ca\") pod \"451ffa8a-a736-4537-9311-f86b0306c5a9\" (UID: \"451ffa8a-a736-4537-9311-f86b0306c5a9\") "
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.147460 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/451ffa8a-a736-4537-9311-f86b0306c5a9-tmp\") pod \"451ffa8a-a736-4537-9311-f86b0306c5a9\" (UID: \"451ffa8a-a736-4537-9311-f86b0306c5a9\") "
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.147509 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/451ffa8a-a736-4537-9311-f86b0306c5a9-config\") pod \"451ffa8a-a736-4537-9311-f86b0306c5a9\" (UID: \"451ffa8a-a736-4537-9311-f86b0306c5a9\") "
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.147568 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/451ffa8a-a736-4537-9311-f86b0306c5a9-serving-cert\") pod \"451ffa8a-a736-4537-9311-f86b0306c5a9\" (UID: \"451ffa8a-a736-4537-9311-f86b0306c5a9\") "
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.147735 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/651e9451-a170-4902-a5fa-8667767efe6f-config\") pod \"route-controller-manager-646c6f99f4-9qtmp\" (UID: \"651e9451-a170-4902-a5fa-8667767efe6f\") " pod="openshift-route-controller-manager/route-controller-manager-646c6f99f4-9qtmp"
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.147778 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/651e9451-a170-4902-a5fa-8667767efe6f-serving-cert\") pod \"route-controller-manager-646c6f99f4-9qtmp\" (UID: \"651e9451-a170-4902-a5fa-8667767efe6f\") " pod="openshift-route-controller-manager/route-controller-manager-646c6f99f4-9qtmp"
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.147801 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/651e9451-a170-4902-a5fa-8667767efe6f-tmp\") pod \"route-controller-manager-646c6f99f4-9qtmp\" (UID: \"651e9451-a170-4902-a5fa-8667767efe6f\") " pod="openshift-route-controller-manager/route-controller-manager-646c6f99f4-9qtmp"
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.147797 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/451ffa8a-a736-4537-9311-f86b0306c5a9-tmp" (OuterVolumeSpecName: "tmp") pod "451ffa8a-a736-4537-9311-f86b0306c5a9" (UID: "451ffa8a-a736-4537-9311-f86b0306c5a9"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.147863 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/651e9451-a170-4902-a5fa-8667767efe6f-client-ca\") pod \"route-controller-manager-646c6f99f4-9qtmp\" (UID: \"651e9451-a170-4902-a5fa-8667767efe6f\") " pod="openshift-route-controller-manager/route-controller-manager-646c6f99f4-9qtmp"
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.147934 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgcsv\" (UniqueName: \"kubernetes.io/projected/651e9451-a170-4902-a5fa-8667767efe6f-kube-api-access-jgcsv\") pod \"route-controller-manager-646c6f99f4-9qtmp\" (UID: \"651e9451-a170-4902-a5fa-8667767efe6f\") " pod="openshift-route-controller-manager/route-controller-manager-646c6f99f4-9qtmp"
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.148015 5112 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/451ffa8a-a736-4537-9311-f86b0306c5a9-tmp\") on node \"crc\" DevicePath \"\""
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.148359 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/451ffa8a-a736-4537-9311-f86b0306c5a9-client-ca" (OuterVolumeSpecName: "client-ca") pod "451ffa8a-a736-4537-9311-f86b0306c5a9" (UID: "451ffa8a-a736-4537-9311-f86b0306c5a9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.148381 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/451ffa8a-a736-4537-9311-f86b0306c5a9-config" (OuterVolumeSpecName: "config") pod "451ffa8a-a736-4537-9311-f86b0306c5a9" (UID: "451ffa8a-a736-4537-9311-f86b0306c5a9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.154241 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/451ffa8a-a736-4537-9311-f86b0306c5a9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "451ffa8a-a736-4537-9311-f86b0306c5a9" (UID: "451ffa8a-a736-4537-9311-f86b0306c5a9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.154262 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/451ffa8a-a736-4537-9311-f86b0306c5a9-kube-api-access-wkn7t" (OuterVolumeSpecName: "kube-api-access-wkn7t") pod "451ffa8a-a736-4537-9311-f86b0306c5a9" (UID: "451ffa8a-a736-4537-9311-f86b0306c5a9"). InnerVolumeSpecName "kube-api-access-wkn7t". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.245559 5112 generic.go:358] "Generic (PLEG): container finished" podID="451ffa8a-a736-4537-9311-f86b0306c5a9" containerID="7e3353120644186f1b650273588055252c74f9361625d77e3f33a68bdbc39076" exitCode=0
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.245669 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-8mgvf"
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.245914 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-8mgvf" event={"ID":"451ffa8a-a736-4537-9311-f86b0306c5a9","Type":"ContainerDied","Data":"7e3353120644186f1b650273588055252c74f9361625d77e3f33a68bdbc39076"}
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.246024 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c6c48458c-8mgvf" event={"ID":"451ffa8a-a736-4537-9311-f86b0306c5a9","Type":"ContainerDied","Data":"f39b13f6b4affbb5571cb62582f4767f79669e86cd1ad2842c408a37aca4d420"}
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.246103 5112 scope.go:117] "RemoveContainer" containerID="7e3353120644186f1b650273588055252c74f9361625d77e3f33a68bdbc39076"
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.248838 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jgcsv\" (UniqueName: \"kubernetes.io/projected/651e9451-a170-4902-a5fa-8667767efe6f-kube-api-access-jgcsv\") pod \"route-controller-manager-646c6f99f4-9qtmp\" (UID: \"651e9451-a170-4902-a5fa-8667767efe6f\") " pod="openshift-route-controller-manager/route-controller-manager-646c6f99f4-9qtmp"
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.248903 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/651e9451-a170-4902-a5fa-8667767efe6f-config\") pod \"route-controller-manager-646c6f99f4-9qtmp\" (UID: \"651e9451-a170-4902-a5fa-8667767efe6f\") " pod="openshift-route-controller-manager/route-controller-manager-646c6f99f4-9qtmp"
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.248951 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/651e9451-a170-4902-a5fa-8667767efe6f-serving-cert\") pod \"route-controller-manager-646c6f99f4-9qtmp\" (UID: \"651e9451-a170-4902-a5fa-8667767efe6f\") " pod="openshift-route-controller-manager/route-controller-manager-646c6f99f4-9qtmp"
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.248980 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/651e9451-a170-4902-a5fa-8667767efe6f-tmp\") pod \"route-controller-manager-646c6f99f4-9qtmp\" (UID: \"651e9451-a170-4902-a5fa-8667767efe6f\") " pod="openshift-route-controller-manager/route-controller-manager-646c6f99f4-9qtmp"
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.249212 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/651e9451-a170-4902-a5fa-8667767efe6f-client-ca\") pod \"route-controller-manager-646c6f99f4-9qtmp\" (UID: \"651e9451-a170-4902-a5fa-8667767efe6f\") " pod="openshift-route-controller-manager/route-controller-manager-646c6f99f4-9qtmp"
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.249279 5112 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/451ffa8a-a736-4537-9311-f86b0306c5a9-config\") on node \"crc\" DevicePath \"\""
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.249291 5112 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/451ffa8a-a736-4537-9311-f86b0306c5a9-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.249302 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wkn7t\" (UniqueName: \"kubernetes.io/projected/451ffa8a-a736-4537-9311-f86b0306c5a9-kube-api-access-wkn7t\") on node \"crc\" DevicePath \"\""
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.249313 5112 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/451ffa8a-a736-4537-9311-f86b0306c5a9-client-ca\") on node \"crc\" DevicePath \"\""
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.250130 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/651e9451-a170-4902-a5fa-8667767efe6f-tmp\") pod \"route-controller-manager-646c6f99f4-9qtmp\" (UID: \"651e9451-a170-4902-a5fa-8667767efe6f\") " pod="openshift-route-controller-manager/route-controller-manager-646c6f99f4-9qtmp"
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.250301 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/651e9451-a170-4902-a5fa-8667767efe6f-client-ca\") pod \"route-controller-manager-646c6f99f4-9qtmp\" (UID: \"651e9451-a170-4902-a5fa-8667767efe6f\") " pod="openshift-route-controller-manager/route-controller-manager-646c6f99f4-9qtmp"
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.250889 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/651e9451-a170-4902-a5fa-8667767efe6f-config\") pod \"route-controller-manager-646c6f99f4-9qtmp\" (UID: \"651e9451-a170-4902-a5fa-8667767efe6f\") " pod="openshift-route-controller-manager/route-controller-manager-646c6f99f4-9qtmp"
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.255463 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/651e9451-a170-4902-a5fa-8667767efe6f-serving-cert\") pod \"route-controller-manager-646c6f99f4-9qtmp\" (UID: \"651e9451-a170-4902-a5fa-8667767efe6f\") " pod="openshift-route-controller-manager/route-controller-manager-646c6f99f4-9qtmp"
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.265937 5112 scope.go:117] "RemoveContainer" containerID="7e3353120644186f1b650273588055252c74f9361625d77e3f33a68bdbc39076"
Dec 08 17:46:10 crc kubenswrapper[5112]: E1208 17:46:10.266553 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e3353120644186f1b650273588055252c74f9361625d77e3f33a68bdbc39076\": container with ID starting with 7e3353120644186f1b650273588055252c74f9361625d77e3f33a68bdbc39076 not found: ID does not exist" containerID="7e3353120644186f1b650273588055252c74f9361625d77e3f33a68bdbc39076"
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.266603 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e3353120644186f1b650273588055252c74f9361625d77e3f33a68bdbc39076"} err="failed to get container status \"7e3353120644186f1b650273588055252c74f9361625d77e3f33a68bdbc39076\": rpc error: code = NotFound desc = could not find container \"7e3353120644186f1b650273588055252c74f9361625d77e3f33a68bdbc39076\": container with ID starting with 7e3353120644186f1b650273588055252c74f9361625d77e3f33a68bdbc39076 not found: ID does not exist"
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.271975 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgcsv\" (UniqueName: \"kubernetes.io/projected/651e9451-a170-4902-a5fa-8667767efe6f-kube-api-access-jgcsv\") pod \"route-controller-manager-646c6f99f4-9qtmp\" (UID: \"651e9451-a170-4902-a5fa-8667767efe6f\") " pod="openshift-route-controller-manager/route-controller-manager-646c6f99f4-9qtmp"
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.272045 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c6c48458c-8mgvf"]
Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.275694 5112 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c6c48458c-8mgvf"]
Dec 08
17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.367016 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-646c6f99f4-9qtmp" Dec 08 17:46:10 crc kubenswrapper[5112]: I1208 17:46:10.778336 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-646c6f99f4-9qtmp"] Dec 08 17:46:11 crc kubenswrapper[5112]: I1208 17:46:11.254827 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-646c6f99f4-9qtmp" event={"ID":"651e9451-a170-4902-a5fa-8667767efe6f","Type":"ContainerStarted","Data":"128d0538860b9709cd0f781943764ec46b3b9e2da056222989f04490a7814e11"} Dec 08 17:46:11 crc kubenswrapper[5112]: I1208 17:46:11.255147 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-646c6f99f4-9qtmp" event={"ID":"651e9451-a170-4902-a5fa-8667767efe6f","Type":"ContainerStarted","Data":"d70e1a2721e607d1701a02aa13f72077935bf089c34cd38db4911deab49d3b42"} Dec 08 17:46:11 crc kubenswrapper[5112]: I1208 17:46:11.255333 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-646c6f99f4-9qtmp" Dec 08 17:46:11 crc kubenswrapper[5112]: I1208 17:46:11.284451 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-646c6f99f4-9qtmp" podStartSLOduration=2.284422247 podStartE2EDuration="2.284422247s" podCreationTimestamp="2025-12-08 17:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:46:11.279209756 +0000 UTC m=+348.288758497" watchObservedRunningTime="2025-12-08 17:46:11.284422247 +0000 UTC m=+348.293970988" Dec 08 17:46:11 crc kubenswrapper[5112]: I1208 
17:46:11.327184 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="451ffa8a-a736-4537-9311-f86b0306c5a9" path="/var/lib/kubelet/pods/451ffa8a-a736-4537-9311-f86b0306c5a9/volumes" Dec 08 17:46:11 crc kubenswrapper[5112]: I1208 17:46:11.743018 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-646c6f99f4-9qtmp" Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.265523 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zngdv"] Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.266273 5112 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zngdv" podUID="ea80841c-bb81-4bd4-a6b4-dde2e04b9351" containerName="registry-server" containerID="cri-o://57f9cc5a2d006bcbdadf8fc7757278396c8d920f679b5dc9b68f70b6f633515f" gracePeriod=30 Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.270638 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f4flg"] Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.271132 5112 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-f4flg" podUID="a8b663e6-709e-4802-8101-44c949911229" containerName="registry-server" containerID="cri-o://42d38998a7c0c716b825c64305b20e22c7508abdbbabc644657925d9a971a778" gracePeriod=30 Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.275789 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-v5t7z"] Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.276068 5112 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-v5t7z" podUID="02b6f45a-2d25-4712-b127-c1906f6fb154" containerName="marketplace-operator" 
containerID="cri-o://856cc3996b9f4ef5ce4e91fe941959c716acef43452d73e51122c52c4b10dd1c" gracePeriod=30 Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.281099 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-phq66"] Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.281376 5112 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-phq66" podUID="a4a649bd-963b-42eb-8283-2f6d98b54ef8" containerName="registry-server" containerID="cri-o://9adef01c0784afb97c60f66a2ea1fd723f38749e4d46b875452f46ccfbc0a0cb" gracePeriod=30 Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.293221 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-2s645"] Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.363832 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4p756"] Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.363979 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-2s645" Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.364017 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-2s645"] Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.364698 5112 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4p756" podUID="36b34f0a-51c8-41d9-a61c-dbc0104bea5d" containerName="registry-server" containerID="cri-o://5bb55f30b0992808f92348dca60115c5aa9bbe4ad80da51e1dc8268c34faec77" gracePeriod=30 Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.456073 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4542073e-f645-4ee6-b28c-56f4c273e9ea-tmp\") pod \"marketplace-operator-547dbd544d-2s645\" (UID: \"4542073e-f645-4ee6-b28c-56f4c273e9ea\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-2s645" Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.456191 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4542073e-f645-4ee6-b28c-56f4c273e9ea-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-2s645\" (UID: \"4542073e-f645-4ee6-b28c-56f4c273e9ea\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-2s645" Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.456262 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4542073e-f645-4ee6-b28c-56f4c273e9ea-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-2s645\" (UID: \"4542073e-f645-4ee6-b28c-56f4c273e9ea\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-2s645" Dec 08 17:46:18 crc 
kubenswrapper[5112]: I1208 17:46:18.456307 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7hf6\" (UniqueName: \"kubernetes.io/projected/4542073e-f645-4ee6-b28c-56f4c273e9ea-kube-api-access-k7hf6\") pod \"marketplace-operator-547dbd544d-2s645\" (UID: \"4542073e-f645-4ee6-b28c-56f4c273e9ea\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-2s645" Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.557284 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4542073e-f645-4ee6-b28c-56f4c273e9ea-tmp\") pod \"marketplace-operator-547dbd544d-2s645\" (UID: \"4542073e-f645-4ee6-b28c-56f4c273e9ea\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-2s645" Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.557353 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4542073e-f645-4ee6-b28c-56f4c273e9ea-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-2s645\" (UID: \"4542073e-f645-4ee6-b28c-56f4c273e9ea\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-2s645" Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.557400 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4542073e-f645-4ee6-b28c-56f4c273e9ea-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-2s645\" (UID: \"4542073e-f645-4ee6-b28c-56f4c273e9ea\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-2s645" Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.557441 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k7hf6\" (UniqueName: \"kubernetes.io/projected/4542073e-f645-4ee6-b28c-56f4c273e9ea-kube-api-access-k7hf6\") pod 
\"marketplace-operator-547dbd544d-2s645\" (UID: \"4542073e-f645-4ee6-b28c-56f4c273e9ea\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-2s645" Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.558000 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4542073e-f645-4ee6-b28c-56f4c273e9ea-tmp\") pod \"marketplace-operator-547dbd544d-2s645\" (UID: \"4542073e-f645-4ee6-b28c-56f4c273e9ea\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-2s645" Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.558864 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4542073e-f645-4ee6-b28c-56f4c273e9ea-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-2s645\" (UID: \"4542073e-f645-4ee6-b28c-56f4c273e9ea\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-2s645" Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.565431 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4542073e-f645-4ee6-b28c-56f4c273e9ea-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-2s645\" (UID: \"4542073e-f645-4ee6-b28c-56f4c273e9ea\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-2s645" Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.576337 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7hf6\" (UniqueName: \"kubernetes.io/projected/4542073e-f645-4ee6-b28c-56f4c273e9ea-kube-api-access-k7hf6\") pod \"marketplace-operator-547dbd544d-2s645\" (UID: \"4542073e-f645-4ee6-b28c-56f4c273e9ea\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-2s645" Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.752813 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-2s645" Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.756301 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-phq66" Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.793479 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zngdv" Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.804392 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-v5t7z" Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.805525 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4p756" Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.834024 5112 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f4flg" Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.862587 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4a649bd-963b-42eb-8283-2f6d98b54ef8-catalog-content\") pod \"a4a649bd-963b-42eb-8283-2f6d98b54ef8\" (UID: \"a4a649bd-963b-42eb-8283-2f6d98b54ef8\") " Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.863582 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4a649bd-963b-42eb-8283-2f6d98b54ef8-utilities\") pod \"a4a649bd-963b-42eb-8283-2f6d98b54ef8\" (UID: \"a4a649bd-963b-42eb-8283-2f6d98b54ef8\") " Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.863736 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99gqn\" (UniqueName: \"kubernetes.io/projected/a4a649bd-963b-42eb-8283-2f6d98b54ef8-kube-api-access-99gqn\") pod \"a4a649bd-963b-42eb-8283-2f6d98b54ef8\" (UID: \"a4a649bd-963b-42eb-8283-2f6d98b54ef8\") " Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.865041 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4a649bd-963b-42eb-8283-2f6d98b54ef8-utilities" (OuterVolumeSpecName: "utilities") pod "a4a649bd-963b-42eb-8283-2f6d98b54ef8" (UID: "a4a649bd-963b-42eb-8283-2f6d98b54ef8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.874374 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4a649bd-963b-42eb-8283-2f6d98b54ef8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a4a649bd-963b-42eb-8283-2f6d98b54ef8" (UID: "a4a649bd-963b-42eb-8283-2f6d98b54ef8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.875219 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4a649bd-963b-42eb-8283-2f6d98b54ef8-kube-api-access-99gqn" (OuterVolumeSpecName: "kube-api-access-99gqn") pod "a4a649bd-963b-42eb-8283-2f6d98b54ef8" (UID: "a4a649bd-963b-42eb-8283-2f6d98b54ef8"). InnerVolumeSpecName "kube-api-access-99gqn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.965903 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea80841c-bb81-4bd4-a6b4-dde2e04b9351-catalog-content\") pod \"ea80841c-bb81-4bd4-a6b4-dde2e04b9351\" (UID: \"ea80841c-bb81-4bd4-a6b4-dde2e04b9351\") " Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.966318 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8b663e6-709e-4802-8101-44c949911229-utilities\") pod \"a8b663e6-709e-4802-8101-44c949911229\" (UID: \"a8b663e6-709e-4802-8101-44c949911229\") " Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.966346 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8b663e6-709e-4802-8101-44c949911229-catalog-content\") pod \"a8b663e6-709e-4802-8101-44c949911229\" (UID: \"a8b663e6-709e-4802-8101-44c949911229\") " Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.966430 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gncnd\" (UniqueName: \"kubernetes.io/projected/02b6f45a-2d25-4712-b127-c1906f6fb154-kube-api-access-gncnd\") pod \"02b6f45a-2d25-4712-b127-c1906f6fb154\" (UID: \"02b6f45a-2d25-4712-b127-c1906f6fb154\") " Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 
17:46:18.966455 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/02b6f45a-2d25-4712-b127-c1906f6fb154-tmp\") pod \"02b6f45a-2d25-4712-b127-c1906f6fb154\" (UID: \"02b6f45a-2d25-4712-b127-c1906f6fb154\") " Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.966484 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea80841c-bb81-4bd4-a6b4-dde2e04b9351-utilities\") pod \"ea80841c-bb81-4bd4-a6b4-dde2e04b9351\" (UID: \"ea80841c-bb81-4bd4-a6b4-dde2e04b9351\") " Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.966513 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rr6xv\" (UniqueName: \"kubernetes.io/projected/a8b663e6-709e-4802-8101-44c949911229-kube-api-access-rr6xv\") pod \"a8b663e6-709e-4802-8101-44c949911229\" (UID: \"a8b663e6-709e-4802-8101-44c949911229\") " Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.966537 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/02b6f45a-2d25-4712-b127-c1906f6fb154-marketplace-operator-metrics\") pod \"02b6f45a-2d25-4712-b127-c1906f6fb154\" (UID: \"02b6f45a-2d25-4712-b127-c1906f6fb154\") " Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.966557 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxbnf\" (UniqueName: \"kubernetes.io/projected/36b34f0a-51c8-41d9-a61c-dbc0104bea5d-kube-api-access-mxbnf\") pod \"36b34f0a-51c8-41d9-a61c-dbc0104bea5d\" (UID: \"36b34f0a-51c8-41d9-a61c-dbc0104bea5d\") " Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.966608 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36b34f0a-51c8-41d9-a61c-dbc0104bea5d-utilities\") pod 
\"36b34f0a-51c8-41d9-a61c-dbc0104bea5d\" (UID: \"36b34f0a-51c8-41d9-a61c-dbc0104bea5d\") " Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.966659 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/02b6f45a-2d25-4712-b127-c1906f6fb154-marketplace-trusted-ca\") pod \"02b6f45a-2d25-4712-b127-c1906f6fb154\" (UID: \"02b6f45a-2d25-4712-b127-c1906f6fb154\") " Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.966724 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75srw\" (UniqueName: \"kubernetes.io/projected/ea80841c-bb81-4bd4-a6b4-dde2e04b9351-kube-api-access-75srw\") pod \"ea80841c-bb81-4bd4-a6b4-dde2e04b9351\" (UID: \"ea80841c-bb81-4bd4-a6b4-dde2e04b9351\") " Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.966761 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36b34f0a-51c8-41d9-a61c-dbc0104bea5d-catalog-content\") pod \"36b34f0a-51c8-41d9-a61c-dbc0104bea5d\" (UID: \"36b34f0a-51c8-41d9-a61c-dbc0104bea5d\") " Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.967215 5112 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4a649bd-963b-42eb-8283-2f6d98b54ef8-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.967239 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99gqn\" (UniqueName: \"kubernetes.io/projected/a4a649bd-963b-42eb-8283-2f6d98b54ef8-kube-api-access-99gqn\") on node \"crc\" DevicePath \"\"" Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.967251 5112 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4a649bd-963b-42eb-8283-2f6d98b54ef8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 
08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.968353 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02b6f45a-2d25-4712-b127-c1906f6fb154-tmp" (OuterVolumeSpecName: "tmp") pod "02b6f45a-2d25-4712-b127-c1906f6fb154" (UID: "02b6f45a-2d25-4712-b127-c1906f6fb154"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.968830 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02b6f45a-2d25-4712-b127-c1906f6fb154-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "02b6f45a-2d25-4712-b127-c1906f6fb154" (UID: "02b6f45a-2d25-4712-b127-c1906f6fb154"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.969019 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36b34f0a-51c8-41d9-a61c-dbc0104bea5d-utilities" (OuterVolumeSpecName: "utilities") pod "36b34f0a-51c8-41d9-a61c-dbc0104bea5d" (UID: "36b34f0a-51c8-41d9-a61c-dbc0104bea5d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.969980 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8b663e6-709e-4802-8101-44c949911229-utilities" (OuterVolumeSpecName: "utilities") pod "a8b663e6-709e-4802-8101-44c949911229" (UID: "a8b663e6-709e-4802-8101-44c949911229"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.970970 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea80841c-bb81-4bd4-a6b4-dde2e04b9351-utilities" (OuterVolumeSpecName: "utilities") pod "ea80841c-bb81-4bd4-a6b4-dde2e04b9351" (UID: "ea80841c-bb81-4bd4-a6b4-dde2e04b9351"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.971645 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36b34f0a-51c8-41d9-a61c-dbc0104bea5d-kube-api-access-mxbnf" (OuterVolumeSpecName: "kube-api-access-mxbnf") pod "36b34f0a-51c8-41d9-a61c-dbc0104bea5d" (UID: "36b34f0a-51c8-41d9-a61c-dbc0104bea5d"). InnerVolumeSpecName "kube-api-access-mxbnf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.971708 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02b6f45a-2d25-4712-b127-c1906f6fb154-kube-api-access-gncnd" (OuterVolumeSpecName: "kube-api-access-gncnd") pod "02b6f45a-2d25-4712-b127-c1906f6fb154" (UID: "02b6f45a-2d25-4712-b127-c1906f6fb154"). InnerVolumeSpecName "kube-api-access-gncnd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.972667 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea80841c-bb81-4bd4-a6b4-dde2e04b9351-kube-api-access-75srw" (OuterVolumeSpecName: "kube-api-access-75srw") pod "ea80841c-bb81-4bd4-a6b4-dde2e04b9351" (UID: "ea80841c-bb81-4bd4-a6b4-dde2e04b9351"). InnerVolumeSpecName "kube-api-access-75srw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.972828 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02b6f45a-2d25-4712-b127-c1906f6fb154-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "02b6f45a-2d25-4712-b127-c1906f6fb154" (UID: "02b6f45a-2d25-4712-b127-c1906f6fb154"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:46:18 crc kubenswrapper[5112]: I1208 17:46:18.973888 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8b663e6-709e-4802-8101-44c949911229-kube-api-access-rr6xv" (OuterVolumeSpecName: "kube-api-access-rr6xv") pod "a8b663e6-709e-4802-8101-44c949911229" (UID: "a8b663e6-709e-4802-8101-44c949911229"). InnerVolumeSpecName "kube-api-access-rr6xv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.009488 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea80841c-bb81-4bd4-a6b4-dde2e04b9351-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea80841c-bb81-4bd4-a6b4-dde2e04b9351" (UID: "ea80841c-bb81-4bd4-a6b4-dde2e04b9351"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.022310 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8b663e6-709e-4802-8101-44c949911229-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8b663e6-709e-4802-8101-44c949911229" (UID: "a8b663e6-709e-4802-8101-44c949911229"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.058156 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36b34f0a-51c8-41d9-a61c-dbc0104bea5d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "36b34f0a-51c8-41d9-a61c-dbc0104bea5d" (UID: "36b34f0a-51c8-41d9-a61c-dbc0104bea5d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.068221 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gncnd\" (UniqueName: \"kubernetes.io/projected/02b6f45a-2d25-4712-b127-c1906f6fb154-kube-api-access-gncnd\") on node \"crc\" DevicePath \"\"" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.068242 5112 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/02b6f45a-2d25-4712-b127-c1906f6fb154-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.068252 5112 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea80841c-bb81-4bd4-a6b4-dde2e04b9351-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.068274 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rr6xv\" (UniqueName: \"kubernetes.io/projected/a8b663e6-709e-4802-8101-44c949911229-kube-api-access-rr6xv\") on node \"crc\" DevicePath \"\"" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.068284 5112 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/02b6f45a-2d25-4712-b127-c1906f6fb154-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.068294 5112 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-mxbnf\" (UniqueName: \"kubernetes.io/projected/36b34f0a-51c8-41d9-a61c-dbc0104bea5d-kube-api-access-mxbnf\") on node \"crc\" DevicePath \"\"" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.068302 5112 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36b34f0a-51c8-41d9-a61c-dbc0104bea5d-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.068310 5112 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/02b6f45a-2d25-4712-b127-c1906f6fb154-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.068318 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-75srw\" (UniqueName: \"kubernetes.io/projected/ea80841c-bb81-4bd4-a6b4-dde2e04b9351-kube-api-access-75srw\") on node \"crc\" DevicePath \"\"" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.068326 5112 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36b34f0a-51c8-41d9-a61c-dbc0104bea5d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.068334 5112 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea80841c-bb81-4bd4-a6b4-dde2e04b9351-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.068341 5112 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8b663e6-709e-4802-8101-44c949911229-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.068348 5112 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a8b663e6-709e-4802-8101-44c949911229-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.199041 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-2s645"] Dec 08 17:46:19 crc kubenswrapper[5112]: W1208 17:46:19.206690 5112 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4542073e_f645_4ee6_b28c_56f4c273e9ea.slice/crio-0a3ad660002a5fc1c4ba4451777f33d1b05d657093385706a145bb14eff5a892 WatchSource:0}: Error finding container 0a3ad660002a5fc1c4ba4451777f33d1b05d657093385706a145bb14eff5a892: Status 404 returned error can't find the container with id 0a3ad660002a5fc1c4ba4451777f33d1b05d657093385706a145bb14eff5a892 Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.307961 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-2s645" event={"ID":"4542073e-f645-4ee6-b28c-56f4c273e9ea","Type":"ContainerStarted","Data":"0a3ad660002a5fc1c4ba4451777f33d1b05d657093385706a145bb14eff5a892"} Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.310754 5112 generic.go:358] "Generic (PLEG): container finished" podID="ea80841c-bb81-4bd4-a6b4-dde2e04b9351" containerID="57f9cc5a2d006bcbdadf8fc7757278396c8d920f679b5dc9b68f70b6f633515f" exitCode=0 Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.310852 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zngdv" event={"ID":"ea80841c-bb81-4bd4-a6b4-dde2e04b9351","Type":"ContainerDied","Data":"57f9cc5a2d006bcbdadf8fc7757278396c8d920f679b5dc9b68f70b6f633515f"} Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.310878 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zngdv" 
event={"ID":"ea80841c-bb81-4bd4-a6b4-dde2e04b9351","Type":"ContainerDied","Data":"5bfc8fededfc2f43caf2fa7eee69c1b39cf98023a765faa356f7ff1490bd52ff"} Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.310899 5112 scope.go:117] "RemoveContainer" containerID="57f9cc5a2d006bcbdadf8fc7757278396c8d920f679b5dc9b68f70b6f633515f" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.311051 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zngdv" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.312764 5112 generic.go:358] "Generic (PLEG): container finished" podID="02b6f45a-2d25-4712-b127-c1906f6fb154" containerID="856cc3996b9f4ef5ce4e91fe941959c716acef43452d73e51122c52c4b10dd1c" exitCode=0 Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.312884 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-v5t7z" event={"ID":"02b6f45a-2d25-4712-b127-c1906f6fb154","Type":"ContainerDied","Data":"856cc3996b9f4ef5ce4e91fe941959c716acef43452d73e51122c52c4b10dd1c"} Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.312939 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-v5t7z" event={"ID":"02b6f45a-2d25-4712-b127-c1906f6fb154","Type":"ContainerDied","Data":"2d795179d45ae0d614c95b9d280a479b2a99483901fc8f0a47ad6107a10cc79a"} Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.312992 5112 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-v5t7z" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.315602 5112 generic.go:358] "Generic (PLEG): container finished" podID="a8b663e6-709e-4802-8101-44c949911229" containerID="42d38998a7c0c716b825c64305b20e22c7508abdbbabc644657925d9a971a778" exitCode=0 Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.315872 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f4flg" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.321510 5112 generic.go:358] "Generic (PLEG): container finished" podID="36b34f0a-51c8-41d9-a61c-dbc0104bea5d" containerID="5bb55f30b0992808f92348dca60115c5aa9bbe4ad80da51e1dc8268c34faec77" exitCode=0 Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.321685 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4p756" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.323422 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4flg" event={"ID":"a8b663e6-709e-4802-8101-44c949911229","Type":"ContainerDied","Data":"42d38998a7c0c716b825c64305b20e22c7508abdbbabc644657925d9a971a778"} Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.323534 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4flg" event={"ID":"a8b663e6-709e-4802-8101-44c949911229","Type":"ContainerDied","Data":"417f318c3d99736782f5125ebcd793b4c11018d95debeb99e3d59e0368d966db"} Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.323612 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4p756" event={"ID":"36b34f0a-51c8-41d9-a61c-dbc0104bea5d","Type":"ContainerDied","Data":"5bb55f30b0992808f92348dca60115c5aa9bbe4ad80da51e1dc8268c34faec77"} Dec 08 17:46:19 crc kubenswrapper[5112]: 
I1208 17:46:19.323680 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4p756" event={"ID":"36b34f0a-51c8-41d9-a61c-dbc0104bea5d","Type":"ContainerDied","Data":"795f631ba23772bde690f32401b15427628d71c86675a5e4e1b4e8e20f7c7dce"} Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.334821 5112 scope.go:117] "RemoveContainer" containerID="9107d51784bb4e353e9753278cfa059c660e0e3bc0273bdbe08024c231c96b3a" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.342661 5112 generic.go:358] "Generic (PLEG): container finished" podID="a4a649bd-963b-42eb-8283-2f6d98b54ef8" containerID="9adef01c0784afb97c60f66a2ea1fd723f38749e4d46b875452f46ccfbc0a0cb" exitCode=0 Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.342750 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-phq66" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.342755 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phq66" event={"ID":"a4a649bd-963b-42eb-8283-2f6d98b54ef8","Type":"ContainerDied","Data":"9adef01c0784afb97c60f66a2ea1fd723f38749e4d46b875452f46ccfbc0a0cb"} Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.343267 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-phq66" event={"ID":"a4a649bd-963b-42eb-8283-2f6d98b54ef8","Type":"ContainerDied","Data":"4ebd3131d186688d02e678708c94a952c67db5da0eecf94944c79d4491925ac4"} Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.376737 5112 scope.go:117] "RemoveContainer" containerID="9910f689a2a781ea8426914accd5ea5e140b675aaca4bbd3d68727b2bbfa7512" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.401116 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zngdv"] Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.411213 5112 kubelet.go:2547] "SyncLoop 
REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zngdv"] Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.415343 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-phq66"] Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.420214 5112 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-phq66"] Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.420615 5112 scope.go:117] "RemoveContainer" containerID="57f9cc5a2d006bcbdadf8fc7757278396c8d920f679b5dc9b68f70b6f633515f" Dec 08 17:46:19 crc kubenswrapper[5112]: E1208 17:46:19.421049 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57f9cc5a2d006bcbdadf8fc7757278396c8d920f679b5dc9b68f70b6f633515f\": container with ID starting with 57f9cc5a2d006bcbdadf8fc7757278396c8d920f679b5dc9b68f70b6f633515f not found: ID does not exist" containerID="57f9cc5a2d006bcbdadf8fc7757278396c8d920f679b5dc9b68f70b6f633515f" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.421100 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57f9cc5a2d006bcbdadf8fc7757278396c8d920f679b5dc9b68f70b6f633515f"} err="failed to get container status \"57f9cc5a2d006bcbdadf8fc7757278396c8d920f679b5dc9b68f70b6f633515f\": rpc error: code = NotFound desc = could not find container \"57f9cc5a2d006bcbdadf8fc7757278396c8d920f679b5dc9b68f70b6f633515f\": container with ID starting with 57f9cc5a2d006bcbdadf8fc7757278396c8d920f679b5dc9b68f70b6f633515f not found: ID does not exist" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.421122 5112 scope.go:117] "RemoveContainer" containerID="9107d51784bb4e353e9753278cfa059c660e0e3bc0273bdbe08024c231c96b3a" Dec 08 17:46:19 crc kubenswrapper[5112]: E1208 17:46:19.421323 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= could not find container \"9107d51784bb4e353e9753278cfa059c660e0e3bc0273bdbe08024c231c96b3a\": container with ID starting with 9107d51784bb4e353e9753278cfa059c660e0e3bc0273bdbe08024c231c96b3a not found: ID does not exist" containerID="9107d51784bb4e353e9753278cfa059c660e0e3bc0273bdbe08024c231c96b3a" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.421338 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9107d51784bb4e353e9753278cfa059c660e0e3bc0273bdbe08024c231c96b3a"} err="failed to get container status \"9107d51784bb4e353e9753278cfa059c660e0e3bc0273bdbe08024c231c96b3a\": rpc error: code = NotFound desc = could not find container \"9107d51784bb4e353e9753278cfa059c660e0e3bc0273bdbe08024c231c96b3a\": container with ID starting with 9107d51784bb4e353e9753278cfa059c660e0e3bc0273bdbe08024c231c96b3a not found: ID does not exist" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.421354 5112 scope.go:117] "RemoveContainer" containerID="9910f689a2a781ea8426914accd5ea5e140b675aaca4bbd3d68727b2bbfa7512" Dec 08 17:46:19 crc kubenswrapper[5112]: E1208 17:46:19.421519 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9910f689a2a781ea8426914accd5ea5e140b675aaca4bbd3d68727b2bbfa7512\": container with ID starting with 9910f689a2a781ea8426914accd5ea5e140b675aaca4bbd3d68727b2bbfa7512 not found: ID does not exist" containerID="9910f689a2a781ea8426914accd5ea5e140b675aaca4bbd3d68727b2bbfa7512" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.421542 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9910f689a2a781ea8426914accd5ea5e140b675aaca4bbd3d68727b2bbfa7512"} err="failed to get container status \"9910f689a2a781ea8426914accd5ea5e140b675aaca4bbd3d68727b2bbfa7512\": rpc error: code = NotFound desc = could not find container 
\"9910f689a2a781ea8426914accd5ea5e140b675aaca4bbd3d68727b2bbfa7512\": container with ID starting with 9910f689a2a781ea8426914accd5ea5e140b675aaca4bbd3d68727b2bbfa7512 not found: ID does not exist" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.421558 5112 scope.go:117] "RemoveContainer" containerID="856cc3996b9f4ef5ce4e91fe941959c716acef43452d73e51122c52c4b10dd1c" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.442273 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-v5t7z"] Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.447266 5112 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-v5t7z"] Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.451585 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f4flg"] Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.455517 5112 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-f4flg"] Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.459654 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4p756"] Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.462635 5112 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4p756"] Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.474672 5112 scope.go:117] "RemoveContainer" containerID="856cc3996b9f4ef5ce4e91fe941959c716acef43452d73e51122c52c4b10dd1c" Dec 08 17:46:19 crc kubenswrapper[5112]: E1208 17:46:19.475162 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"856cc3996b9f4ef5ce4e91fe941959c716acef43452d73e51122c52c4b10dd1c\": container with ID starting with 856cc3996b9f4ef5ce4e91fe941959c716acef43452d73e51122c52c4b10dd1c not found: ID does not exist" 
containerID="856cc3996b9f4ef5ce4e91fe941959c716acef43452d73e51122c52c4b10dd1c" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.475203 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"856cc3996b9f4ef5ce4e91fe941959c716acef43452d73e51122c52c4b10dd1c"} err="failed to get container status \"856cc3996b9f4ef5ce4e91fe941959c716acef43452d73e51122c52c4b10dd1c\": rpc error: code = NotFound desc = could not find container \"856cc3996b9f4ef5ce4e91fe941959c716acef43452d73e51122c52c4b10dd1c\": container with ID starting with 856cc3996b9f4ef5ce4e91fe941959c716acef43452d73e51122c52c4b10dd1c not found: ID does not exist" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.475222 5112 scope.go:117] "RemoveContainer" containerID="42d38998a7c0c716b825c64305b20e22c7508abdbbabc644657925d9a971a778" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.489052 5112 scope.go:117] "RemoveContainer" containerID="984bbfb75141638f8b9bc93942d54a4a539ca9e3b89a7dab90392d1ab0ed44a2" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.509261 5112 scope.go:117] "RemoveContainer" containerID="7c9900c3b6f20b146013624c84cb3858a4baa5a0c4899db3405e1cf9c530b88b" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.527571 5112 scope.go:117] "RemoveContainer" containerID="42d38998a7c0c716b825c64305b20e22c7508abdbbabc644657925d9a971a778" Dec 08 17:46:19 crc kubenswrapper[5112]: E1208 17:46:19.527978 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42d38998a7c0c716b825c64305b20e22c7508abdbbabc644657925d9a971a778\": container with ID starting with 42d38998a7c0c716b825c64305b20e22c7508abdbbabc644657925d9a971a778 not found: ID does not exist" containerID="42d38998a7c0c716b825c64305b20e22c7508abdbbabc644657925d9a971a778" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.528010 5112 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"42d38998a7c0c716b825c64305b20e22c7508abdbbabc644657925d9a971a778"} err="failed to get container status \"42d38998a7c0c716b825c64305b20e22c7508abdbbabc644657925d9a971a778\": rpc error: code = NotFound desc = could not find container \"42d38998a7c0c716b825c64305b20e22c7508abdbbabc644657925d9a971a778\": container with ID starting with 42d38998a7c0c716b825c64305b20e22c7508abdbbabc644657925d9a971a778 not found: ID does not exist" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.528030 5112 scope.go:117] "RemoveContainer" containerID="984bbfb75141638f8b9bc93942d54a4a539ca9e3b89a7dab90392d1ab0ed44a2" Dec 08 17:46:19 crc kubenswrapper[5112]: E1208 17:46:19.528251 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"984bbfb75141638f8b9bc93942d54a4a539ca9e3b89a7dab90392d1ab0ed44a2\": container with ID starting with 984bbfb75141638f8b9bc93942d54a4a539ca9e3b89a7dab90392d1ab0ed44a2 not found: ID does not exist" containerID="984bbfb75141638f8b9bc93942d54a4a539ca9e3b89a7dab90392d1ab0ed44a2" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.528282 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"984bbfb75141638f8b9bc93942d54a4a539ca9e3b89a7dab90392d1ab0ed44a2"} err="failed to get container status \"984bbfb75141638f8b9bc93942d54a4a539ca9e3b89a7dab90392d1ab0ed44a2\": rpc error: code = NotFound desc = could not find container \"984bbfb75141638f8b9bc93942d54a4a539ca9e3b89a7dab90392d1ab0ed44a2\": container with ID starting with 984bbfb75141638f8b9bc93942d54a4a539ca9e3b89a7dab90392d1ab0ed44a2 not found: ID does not exist" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.528301 5112 scope.go:117] "RemoveContainer" containerID="7c9900c3b6f20b146013624c84cb3858a4baa5a0c4899db3405e1cf9c530b88b" Dec 08 17:46:19 crc kubenswrapper[5112]: E1208 17:46:19.528497 5112 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"7c9900c3b6f20b146013624c84cb3858a4baa5a0c4899db3405e1cf9c530b88b\": container with ID starting with 7c9900c3b6f20b146013624c84cb3858a4baa5a0c4899db3405e1cf9c530b88b not found: ID does not exist" containerID="7c9900c3b6f20b146013624c84cb3858a4baa5a0c4899db3405e1cf9c530b88b" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.528527 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c9900c3b6f20b146013624c84cb3858a4baa5a0c4899db3405e1cf9c530b88b"} err="failed to get container status \"7c9900c3b6f20b146013624c84cb3858a4baa5a0c4899db3405e1cf9c530b88b\": rpc error: code = NotFound desc = could not find container \"7c9900c3b6f20b146013624c84cb3858a4baa5a0c4899db3405e1cf9c530b88b\": container with ID starting with 7c9900c3b6f20b146013624c84cb3858a4baa5a0c4899db3405e1cf9c530b88b not found: ID does not exist" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.528539 5112 scope.go:117] "RemoveContainer" containerID="5bb55f30b0992808f92348dca60115c5aa9bbe4ad80da51e1dc8268c34faec77" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.540425 5112 scope.go:117] "RemoveContainer" containerID="fe0065443d2203b8506a3a316725cd4302de3bc871f18d4f8c46b63fcd9c3ff7" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.551850 5112 scope.go:117] "RemoveContainer" containerID="51f86c3fa8e40b9d10281b824a2768a176626627cdeb402f0c46e768d4aedd3f" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.570973 5112 scope.go:117] "RemoveContainer" containerID="5bb55f30b0992808f92348dca60115c5aa9bbe4ad80da51e1dc8268c34faec77" Dec 08 17:46:19 crc kubenswrapper[5112]: E1208 17:46:19.571461 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bb55f30b0992808f92348dca60115c5aa9bbe4ad80da51e1dc8268c34faec77\": container with ID starting with 
5bb55f30b0992808f92348dca60115c5aa9bbe4ad80da51e1dc8268c34faec77 not found: ID does not exist" containerID="5bb55f30b0992808f92348dca60115c5aa9bbe4ad80da51e1dc8268c34faec77" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.571495 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bb55f30b0992808f92348dca60115c5aa9bbe4ad80da51e1dc8268c34faec77"} err="failed to get container status \"5bb55f30b0992808f92348dca60115c5aa9bbe4ad80da51e1dc8268c34faec77\": rpc error: code = NotFound desc = could not find container \"5bb55f30b0992808f92348dca60115c5aa9bbe4ad80da51e1dc8268c34faec77\": container with ID starting with 5bb55f30b0992808f92348dca60115c5aa9bbe4ad80da51e1dc8268c34faec77 not found: ID does not exist" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.571527 5112 scope.go:117] "RemoveContainer" containerID="fe0065443d2203b8506a3a316725cd4302de3bc871f18d4f8c46b63fcd9c3ff7" Dec 08 17:46:19 crc kubenswrapper[5112]: E1208 17:46:19.571927 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe0065443d2203b8506a3a316725cd4302de3bc871f18d4f8c46b63fcd9c3ff7\": container with ID starting with fe0065443d2203b8506a3a316725cd4302de3bc871f18d4f8c46b63fcd9c3ff7 not found: ID does not exist" containerID="fe0065443d2203b8506a3a316725cd4302de3bc871f18d4f8c46b63fcd9c3ff7" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.571967 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe0065443d2203b8506a3a316725cd4302de3bc871f18d4f8c46b63fcd9c3ff7"} err="failed to get container status \"fe0065443d2203b8506a3a316725cd4302de3bc871f18d4f8c46b63fcd9c3ff7\": rpc error: code = NotFound desc = could not find container \"fe0065443d2203b8506a3a316725cd4302de3bc871f18d4f8c46b63fcd9c3ff7\": container with ID starting with fe0065443d2203b8506a3a316725cd4302de3bc871f18d4f8c46b63fcd9c3ff7 not found: ID does not 
exist" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.571996 5112 scope.go:117] "RemoveContainer" containerID="51f86c3fa8e40b9d10281b824a2768a176626627cdeb402f0c46e768d4aedd3f" Dec 08 17:46:19 crc kubenswrapper[5112]: E1208 17:46:19.572281 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51f86c3fa8e40b9d10281b824a2768a176626627cdeb402f0c46e768d4aedd3f\": container with ID starting with 51f86c3fa8e40b9d10281b824a2768a176626627cdeb402f0c46e768d4aedd3f not found: ID does not exist" containerID="51f86c3fa8e40b9d10281b824a2768a176626627cdeb402f0c46e768d4aedd3f" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.572307 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51f86c3fa8e40b9d10281b824a2768a176626627cdeb402f0c46e768d4aedd3f"} err="failed to get container status \"51f86c3fa8e40b9d10281b824a2768a176626627cdeb402f0c46e768d4aedd3f\": rpc error: code = NotFound desc = could not find container \"51f86c3fa8e40b9d10281b824a2768a176626627cdeb402f0c46e768d4aedd3f\": container with ID starting with 51f86c3fa8e40b9d10281b824a2768a176626627cdeb402f0c46e768d4aedd3f not found: ID does not exist" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.572321 5112 scope.go:117] "RemoveContainer" containerID="9adef01c0784afb97c60f66a2ea1fd723f38749e4d46b875452f46ccfbc0a0cb" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.593702 5112 scope.go:117] "RemoveContainer" containerID="da2708a442b70e7319fcdab69b796e15f30bffc79fc26d6fc4d2cfbc4a12e58c" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.608620 5112 scope.go:117] "RemoveContainer" containerID="179e63f3e82b818415472bc39abb3624b25eadae79e49f48e8a12d6067e8efe3" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.621761 5112 scope.go:117] "RemoveContainer" containerID="9adef01c0784afb97c60f66a2ea1fd723f38749e4d46b875452f46ccfbc0a0cb" Dec 08 17:46:19 crc 
kubenswrapper[5112]: E1208 17:46:19.622391 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9adef01c0784afb97c60f66a2ea1fd723f38749e4d46b875452f46ccfbc0a0cb\": container with ID starting with 9adef01c0784afb97c60f66a2ea1fd723f38749e4d46b875452f46ccfbc0a0cb not found: ID does not exist" containerID="9adef01c0784afb97c60f66a2ea1fd723f38749e4d46b875452f46ccfbc0a0cb" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.622437 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9adef01c0784afb97c60f66a2ea1fd723f38749e4d46b875452f46ccfbc0a0cb"} err="failed to get container status \"9adef01c0784afb97c60f66a2ea1fd723f38749e4d46b875452f46ccfbc0a0cb\": rpc error: code = NotFound desc = could not find container \"9adef01c0784afb97c60f66a2ea1fd723f38749e4d46b875452f46ccfbc0a0cb\": container with ID starting with 9adef01c0784afb97c60f66a2ea1fd723f38749e4d46b875452f46ccfbc0a0cb not found: ID does not exist" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.622462 5112 scope.go:117] "RemoveContainer" containerID="da2708a442b70e7319fcdab69b796e15f30bffc79fc26d6fc4d2cfbc4a12e58c" Dec 08 17:46:19 crc kubenswrapper[5112]: E1208 17:46:19.622941 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da2708a442b70e7319fcdab69b796e15f30bffc79fc26d6fc4d2cfbc4a12e58c\": container with ID starting with da2708a442b70e7319fcdab69b796e15f30bffc79fc26d6fc4d2cfbc4a12e58c not found: ID does not exist" containerID="da2708a442b70e7319fcdab69b796e15f30bffc79fc26d6fc4d2cfbc4a12e58c" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.622982 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da2708a442b70e7319fcdab69b796e15f30bffc79fc26d6fc4d2cfbc4a12e58c"} err="failed to get container status 
\"da2708a442b70e7319fcdab69b796e15f30bffc79fc26d6fc4d2cfbc4a12e58c\": rpc error: code = NotFound desc = could not find container \"da2708a442b70e7319fcdab69b796e15f30bffc79fc26d6fc4d2cfbc4a12e58c\": container with ID starting with da2708a442b70e7319fcdab69b796e15f30bffc79fc26d6fc4d2cfbc4a12e58c not found: ID does not exist" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.623011 5112 scope.go:117] "RemoveContainer" containerID="179e63f3e82b818415472bc39abb3624b25eadae79e49f48e8a12d6067e8efe3" Dec 08 17:46:19 crc kubenswrapper[5112]: E1208 17:46:19.623350 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"179e63f3e82b818415472bc39abb3624b25eadae79e49f48e8a12d6067e8efe3\": container with ID starting with 179e63f3e82b818415472bc39abb3624b25eadae79e49f48e8a12d6067e8efe3 not found: ID does not exist" containerID="179e63f3e82b818415472bc39abb3624b25eadae79e49f48e8a12d6067e8efe3" Dec 08 17:46:19 crc kubenswrapper[5112]: I1208 17:46:19.623398 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"179e63f3e82b818415472bc39abb3624b25eadae79e49f48e8a12d6067e8efe3"} err="failed to get container status \"179e63f3e82b818415472bc39abb3624b25eadae79e49f48e8a12d6067e8efe3\": rpc error: code = NotFound desc = could not find container \"179e63f3e82b818415472bc39abb3624b25eadae79e49f48e8a12d6067e8efe3\": container with ID starting with 179e63f3e82b818415472bc39abb3624b25eadae79e49f48e8a12d6067e8efe3 not found: ID does not exist" Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.351818 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-2s645" event={"ID":"4542073e-f645-4ee6-b28c-56f4c273e9ea","Type":"ContainerStarted","Data":"2a73c34b9c90a6b363be58656c1c4326ed24d44c8f0425c51f3697f245b9e57f"} Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.351975 5112 kubelet.go:2658] "SyncLoop 
(probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-2s645" Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.356354 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-2s645" Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.368717 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-2s645" podStartSLOduration=2.36870125 podStartE2EDuration="2.36870125s" podCreationTimestamp="2025-12-08 17:46:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:46:20.3657423 +0000 UTC m=+357.375291011" watchObservedRunningTime="2025-12-08 17:46:20.36870125 +0000 UTC m=+357.378249951" Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.468173 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k5bmv"] Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.468652 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a4a649bd-963b-42eb-8283-2f6d98b54ef8" containerName="registry-server" Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.468669 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4a649bd-963b-42eb-8283-2f6d98b54ef8" containerName="registry-server" Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.468678 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="36b34f0a-51c8-41d9-a61c-dbc0104bea5d" containerName="registry-server" Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.468684 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="36b34f0a-51c8-41d9-a61c-dbc0104bea5d" containerName="registry-server" Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.468694 5112 cpu_manager.go:401] "RemoveStaleState: 
containerMap: removing container" podUID="ea80841c-bb81-4bd4-a6b4-dde2e04b9351" containerName="extract-utilities"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.468700 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea80841c-bb81-4bd4-a6b4-dde2e04b9351" containerName="extract-utilities"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.468718 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a8b663e6-709e-4802-8101-44c949911229" containerName="extract-utilities"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.468723 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8b663e6-709e-4802-8101-44c949911229" containerName="extract-utilities"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.468732 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="02b6f45a-2d25-4712-b127-c1906f6fb154" containerName="marketplace-operator"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.468737 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="02b6f45a-2d25-4712-b127-c1906f6fb154" containerName="marketplace-operator"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.468746 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ea80841c-bb81-4bd4-a6b4-dde2e04b9351" containerName="extract-content"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.468751 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea80841c-bb81-4bd4-a6b4-dde2e04b9351" containerName="extract-content"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.468758 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ea80841c-bb81-4bd4-a6b4-dde2e04b9351" containerName="registry-server"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.468765 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea80841c-bb81-4bd4-a6b4-dde2e04b9351" containerName="registry-server"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.468774 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="36b34f0a-51c8-41d9-a61c-dbc0104bea5d" containerName="extract-utilities"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.468778 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="36b34f0a-51c8-41d9-a61c-dbc0104bea5d" containerName="extract-utilities"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.468784 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a8b663e6-709e-4802-8101-44c949911229" containerName="registry-server"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.468789 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8b663e6-709e-4802-8101-44c949911229" containerName="registry-server"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.468795 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="36b34f0a-51c8-41d9-a61c-dbc0104bea5d" containerName="extract-content"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.468802 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="36b34f0a-51c8-41d9-a61c-dbc0104bea5d" containerName="extract-content"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.468811 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a8b663e6-709e-4802-8101-44c949911229" containerName="extract-content"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.468816 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8b663e6-709e-4802-8101-44c949911229" containerName="extract-content"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.468825 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a4a649bd-963b-42eb-8283-2f6d98b54ef8" containerName="extract-utilities"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.468830 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4a649bd-963b-42eb-8283-2f6d98b54ef8" containerName="extract-utilities"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.468837 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a4a649bd-963b-42eb-8283-2f6d98b54ef8" containerName="extract-content"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.468842 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4a649bd-963b-42eb-8283-2f6d98b54ef8" containerName="extract-content"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.468916 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="a4a649bd-963b-42eb-8283-2f6d98b54ef8" containerName="registry-server"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.468924 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="ea80841c-bb81-4bd4-a6b4-dde2e04b9351" containerName="registry-server"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.468936 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="36b34f0a-51c8-41d9-a61c-dbc0104bea5d" containerName="registry-server"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.468945 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="02b6f45a-2d25-4712-b127-c1906f6fb154" containerName="marketplace-operator"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.468952 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="a8b663e6-709e-4802-8101-44c949911229" containerName="registry-server"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.485522 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k5bmv"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.487873 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k5bmv"]
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.488349 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\""
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.585432 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9670a33c-0814-4c92-9bf2-8eff61da9fb7-catalog-content\") pod \"redhat-marketplace-k5bmv\" (UID: \"9670a33c-0814-4c92-9bf2-8eff61da9fb7\") " pod="openshift-marketplace/redhat-marketplace-k5bmv"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.585494 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9670a33c-0814-4c92-9bf2-8eff61da9fb7-utilities\") pod \"redhat-marketplace-k5bmv\" (UID: \"9670a33c-0814-4c92-9bf2-8eff61da9fb7\") " pod="openshift-marketplace/redhat-marketplace-k5bmv"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.585597 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bphps\" (UniqueName: \"kubernetes.io/projected/9670a33c-0814-4c92-9bf2-8eff61da9fb7-kube-api-access-bphps\") pod \"redhat-marketplace-k5bmv\" (UID: \"9670a33c-0814-4c92-9bf2-8eff61da9fb7\") " pod="openshift-marketplace/redhat-marketplace-k5bmv"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.675851 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ln45n"]
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.682096 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ln45n"]
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.682215 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ln45n"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.684824 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\""
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.686725 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9670a33c-0814-4c92-9bf2-8eff61da9fb7-utilities\") pod \"redhat-marketplace-k5bmv\" (UID: \"9670a33c-0814-4c92-9bf2-8eff61da9fb7\") " pod="openshift-marketplace/redhat-marketplace-k5bmv"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.686792 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc5f8270-78f1-4ca1-9be5-8b3c09ebb4ca-utilities\") pod \"community-operators-ln45n\" (UID: \"fc5f8270-78f1-4ca1-9be5-8b3c09ebb4ca\") " pod="openshift-marketplace/community-operators-ln45n"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.686823 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bphps\" (UniqueName: \"kubernetes.io/projected/9670a33c-0814-4c92-9bf2-8eff61da9fb7-kube-api-access-bphps\") pod \"redhat-marketplace-k5bmv\" (UID: \"9670a33c-0814-4c92-9bf2-8eff61da9fb7\") " pod="openshift-marketplace/redhat-marketplace-k5bmv"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.686900 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cptjh\" (UniqueName: \"kubernetes.io/projected/fc5f8270-78f1-4ca1-9be5-8b3c09ebb4ca-kube-api-access-cptjh\") pod \"community-operators-ln45n\" (UID: \"fc5f8270-78f1-4ca1-9be5-8b3c09ebb4ca\") " pod="openshift-marketplace/community-operators-ln45n"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.686950 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9670a33c-0814-4c92-9bf2-8eff61da9fb7-catalog-content\") pod \"redhat-marketplace-k5bmv\" (UID: \"9670a33c-0814-4c92-9bf2-8eff61da9fb7\") " pod="openshift-marketplace/redhat-marketplace-k5bmv"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.686982 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc5f8270-78f1-4ca1-9be5-8b3c09ebb4ca-catalog-content\") pod \"community-operators-ln45n\" (UID: \"fc5f8270-78f1-4ca1-9be5-8b3c09ebb4ca\") " pod="openshift-marketplace/community-operators-ln45n"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.687248 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9670a33c-0814-4c92-9bf2-8eff61da9fb7-utilities\") pod \"redhat-marketplace-k5bmv\" (UID: \"9670a33c-0814-4c92-9bf2-8eff61da9fb7\") " pod="openshift-marketplace/redhat-marketplace-k5bmv"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.687686 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9670a33c-0814-4c92-9bf2-8eff61da9fb7-catalog-content\") pod \"redhat-marketplace-k5bmv\" (UID: \"9670a33c-0814-4c92-9bf2-8eff61da9fb7\") " pod="openshift-marketplace/redhat-marketplace-k5bmv"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.708772 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bphps\" (UniqueName: \"kubernetes.io/projected/9670a33c-0814-4c92-9bf2-8eff61da9fb7-kube-api-access-bphps\") pod \"redhat-marketplace-k5bmv\" (UID: \"9670a33c-0814-4c92-9bf2-8eff61da9fb7\") " pod="openshift-marketplace/redhat-marketplace-k5bmv"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.788042 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc5f8270-78f1-4ca1-9be5-8b3c09ebb4ca-catalog-content\") pod \"community-operators-ln45n\" (UID: \"fc5f8270-78f1-4ca1-9be5-8b3c09ebb4ca\") " pod="openshift-marketplace/community-operators-ln45n"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.788389 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc5f8270-78f1-4ca1-9be5-8b3c09ebb4ca-utilities\") pod \"community-operators-ln45n\" (UID: \"fc5f8270-78f1-4ca1-9be5-8b3c09ebb4ca\") " pod="openshift-marketplace/community-operators-ln45n"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.788451 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cptjh\" (UniqueName: \"kubernetes.io/projected/fc5f8270-78f1-4ca1-9be5-8b3c09ebb4ca-kube-api-access-cptjh\") pod \"community-operators-ln45n\" (UID: \"fc5f8270-78f1-4ca1-9be5-8b3c09ebb4ca\") " pod="openshift-marketplace/community-operators-ln45n"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.788649 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc5f8270-78f1-4ca1-9be5-8b3c09ebb4ca-catalog-content\") pod \"community-operators-ln45n\" (UID: \"fc5f8270-78f1-4ca1-9be5-8b3c09ebb4ca\") " pod="openshift-marketplace/community-operators-ln45n"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.789182 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc5f8270-78f1-4ca1-9be5-8b3c09ebb4ca-utilities\") pod \"community-operators-ln45n\" (UID: \"fc5f8270-78f1-4ca1-9be5-8b3c09ebb4ca\") " pod="openshift-marketplace/community-operators-ln45n"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.808456 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cptjh\" (UniqueName: \"kubernetes.io/projected/fc5f8270-78f1-4ca1-9be5-8b3c09ebb4ca-kube-api-access-cptjh\") pod \"community-operators-ln45n\" (UID: \"fc5f8270-78f1-4ca1-9be5-8b3c09ebb4ca\") " pod="openshift-marketplace/community-operators-ln45n"
Dec 08 17:46:20 crc kubenswrapper[5112]: I1208 17:46:20.814967 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k5bmv"
Dec 08 17:46:21 crc kubenswrapper[5112]: I1208 17:46:21.004912 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ln45n"
Dec 08 17:46:21 crc kubenswrapper[5112]: I1208 17:46:21.211291 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k5bmv"]
Dec 08 17:46:21 crc kubenswrapper[5112]: W1208 17:46:21.215092 5112 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9670a33c_0814_4c92_9bf2_8eff61da9fb7.slice/crio-c74e5b8e4259047e5d96f9fb137723d80900734526f4dbc9707b6378b41dcb9a WatchSource:0}: Error finding container c74e5b8e4259047e5d96f9fb137723d80900734526f4dbc9707b6378b41dcb9a: Status 404 returned error can't find the container with id c74e5b8e4259047e5d96f9fb137723d80900734526f4dbc9707b6378b41dcb9a
Dec 08 17:46:21 crc kubenswrapper[5112]: I1208 17:46:21.323920 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02b6f45a-2d25-4712-b127-c1906f6fb154" path="/var/lib/kubelet/pods/02b6f45a-2d25-4712-b127-c1906f6fb154/volumes"
Dec 08 17:46:21 crc kubenswrapper[5112]: I1208 17:46:21.324683 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36b34f0a-51c8-41d9-a61c-dbc0104bea5d" path="/var/lib/kubelet/pods/36b34f0a-51c8-41d9-a61c-dbc0104bea5d/volumes"
Dec 08 17:46:21 crc kubenswrapper[5112]: I1208 17:46:21.325376 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4a649bd-963b-42eb-8283-2f6d98b54ef8" path="/var/lib/kubelet/pods/a4a649bd-963b-42eb-8283-2f6d98b54ef8/volumes"
Dec 08 17:46:21 crc kubenswrapper[5112]: I1208 17:46:21.326634 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8b663e6-709e-4802-8101-44c949911229" path="/var/lib/kubelet/pods/a8b663e6-709e-4802-8101-44c949911229/volumes"
Dec 08 17:46:21 crc kubenswrapper[5112]: I1208 17:46:21.327302 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea80841c-bb81-4bd4-a6b4-dde2e04b9351" path="/var/lib/kubelet/pods/ea80841c-bb81-4bd4-a6b4-dde2e04b9351/volumes"
Dec 08 17:46:21 crc kubenswrapper[5112]: I1208 17:46:21.361680 5112 generic.go:358] "Generic (PLEG): container finished" podID="9670a33c-0814-4c92-9bf2-8eff61da9fb7" containerID="a6f335ce8f1c3421eda08f43d3bdab1c5e496be3b4657588e665b68a8cfc3b57" exitCode=0
Dec 08 17:46:21 crc kubenswrapper[5112]: I1208 17:46:21.361765 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5bmv" event={"ID":"9670a33c-0814-4c92-9bf2-8eff61da9fb7","Type":"ContainerDied","Data":"a6f335ce8f1c3421eda08f43d3bdab1c5e496be3b4657588e665b68a8cfc3b57"}
Dec 08 17:46:21 crc kubenswrapper[5112]: I1208 17:46:21.361834 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5bmv" event={"ID":"9670a33c-0814-4c92-9bf2-8eff61da9fb7","Type":"ContainerStarted","Data":"c74e5b8e4259047e5d96f9fb137723d80900734526f4dbc9707b6378b41dcb9a"}
Dec 08 17:46:21 crc kubenswrapper[5112]: I1208 17:46:21.392318 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ln45n"]
Dec 08 17:46:21 crc kubenswrapper[5112]: W1208 17:46:21.408546 5112 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc5f8270_78f1_4ca1_9be5_8b3c09ebb4ca.slice/crio-c5295a4662a978b8b138012d5a3d327dce93dcb21ce3385d5960929d7bf54f5b WatchSource:0}: Error finding container c5295a4662a978b8b138012d5a3d327dce93dcb21ce3385d5960929d7bf54f5b: Status 404 returned error can't find the container with id c5295a4662a978b8b138012d5a3d327dce93dcb21ce3385d5960929d7bf54f5b
Dec 08 17:46:22 crc kubenswrapper[5112]: I1208 17:46:22.368332 5112 generic.go:358] "Generic (PLEG): container finished" podID="9670a33c-0814-4c92-9bf2-8eff61da9fb7" containerID="fa1cad752dbabb15bd5ebc07f99b03f6b5b6a2fb61b721c2df98f115183bbb96" exitCode=0
Dec 08 17:46:22 crc kubenswrapper[5112]: I1208 17:46:22.368409 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5bmv" event={"ID":"9670a33c-0814-4c92-9bf2-8eff61da9fb7","Type":"ContainerDied","Data":"fa1cad752dbabb15bd5ebc07f99b03f6b5b6a2fb61b721c2df98f115183bbb96"}
Dec 08 17:46:22 crc kubenswrapper[5112]: I1208 17:46:22.370269 5112 generic.go:358] "Generic (PLEG): container finished" podID="fc5f8270-78f1-4ca1-9be5-8b3c09ebb4ca" containerID="7f36c5b37861284049a1923466c51d3d18429d8aef566b45d3235d7b22b1b440" exitCode=0
Dec 08 17:46:22 crc kubenswrapper[5112]: I1208 17:46:22.370311 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ln45n" event={"ID":"fc5f8270-78f1-4ca1-9be5-8b3c09ebb4ca","Type":"ContainerDied","Data":"7f36c5b37861284049a1923466c51d3d18429d8aef566b45d3235d7b22b1b440"}
Dec 08 17:46:22 crc kubenswrapper[5112]: I1208 17:46:22.370374 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ln45n" event={"ID":"fc5f8270-78f1-4ca1-9be5-8b3c09ebb4ca","Type":"ContainerStarted","Data":"c5295a4662a978b8b138012d5a3d327dce93dcb21ce3385d5960929d7bf54f5b"}
Dec 08 17:46:22 crc kubenswrapper[5112]: I1208 17:46:22.869432 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b6l2f"]
Dec 08 17:46:22 crc kubenswrapper[5112]: I1208 17:46:22.874731 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b6l2f"
Dec 08 17:46:22 crc kubenswrapper[5112]: I1208 17:46:22.876959 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\""
Dec 08 17:46:22 crc kubenswrapper[5112]: I1208 17:46:22.883307 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b6l2f"]
Dec 08 17:46:22 crc kubenswrapper[5112]: I1208 17:46:22.913227 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7846011-283b-425d-834f-785b2256c0ed-catalog-content\") pod \"certified-operators-b6l2f\" (UID: \"f7846011-283b-425d-834f-785b2256c0ed\") " pod="openshift-marketplace/certified-operators-b6l2f"
Dec 08 17:46:22 crc kubenswrapper[5112]: I1208 17:46:22.913285 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5lzh\" (UniqueName: \"kubernetes.io/projected/f7846011-283b-425d-834f-785b2256c0ed-kube-api-access-h5lzh\") pod \"certified-operators-b6l2f\" (UID: \"f7846011-283b-425d-834f-785b2256c0ed\") " pod="openshift-marketplace/certified-operators-b6l2f"
Dec 08 17:46:22 crc kubenswrapper[5112]: I1208 17:46:22.913308 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7846011-283b-425d-834f-785b2256c0ed-utilities\") pod \"certified-operators-b6l2f\" (UID: \"f7846011-283b-425d-834f-785b2256c0ed\") " pod="openshift-marketplace/certified-operators-b6l2f"
Dec 08 17:46:23 crc kubenswrapper[5112]: I1208 17:46:23.014827 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7846011-283b-425d-834f-785b2256c0ed-catalog-content\") pod \"certified-operators-b6l2f\" (UID: \"f7846011-283b-425d-834f-785b2256c0ed\") " pod="openshift-marketplace/certified-operators-b6l2f"
Dec 08 17:46:23 crc kubenswrapper[5112]: I1208 17:46:23.014890 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h5lzh\" (UniqueName: \"kubernetes.io/projected/f7846011-283b-425d-834f-785b2256c0ed-kube-api-access-h5lzh\") pod \"certified-operators-b6l2f\" (UID: \"f7846011-283b-425d-834f-785b2256c0ed\") " pod="openshift-marketplace/certified-operators-b6l2f"
Dec 08 17:46:23 crc kubenswrapper[5112]: I1208 17:46:23.014914 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7846011-283b-425d-834f-785b2256c0ed-utilities\") pod \"certified-operators-b6l2f\" (UID: \"f7846011-283b-425d-834f-785b2256c0ed\") " pod="openshift-marketplace/certified-operators-b6l2f"
Dec 08 17:46:23 crc kubenswrapper[5112]: I1208 17:46:23.015309 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7846011-283b-425d-834f-785b2256c0ed-catalog-content\") pod \"certified-operators-b6l2f\" (UID: \"f7846011-283b-425d-834f-785b2256c0ed\") " pod="openshift-marketplace/certified-operators-b6l2f"
Dec 08 17:46:23 crc kubenswrapper[5112]: I1208 17:46:23.015471 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7846011-283b-425d-834f-785b2256c0ed-utilities\") pod \"certified-operators-b6l2f\" (UID: \"f7846011-283b-425d-834f-785b2256c0ed\") " pod="openshift-marketplace/certified-operators-b6l2f"
Dec 08 17:46:23 crc kubenswrapper[5112]: I1208 17:46:23.037177 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5lzh\" (UniqueName: \"kubernetes.io/projected/f7846011-283b-425d-834f-785b2256c0ed-kube-api-access-h5lzh\") pod \"certified-operators-b6l2f\" (UID: \"f7846011-283b-425d-834f-785b2256c0ed\") " pod="openshift-marketplace/certified-operators-b6l2f"
Dec 08 17:46:23 crc kubenswrapper[5112]: I1208 17:46:23.069168 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xw8gq"]
Dec 08 17:46:23 crc kubenswrapper[5112]: I1208 17:46:23.079507 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xw8gq"
Dec 08 17:46:23 crc kubenswrapper[5112]: I1208 17:46:23.079759 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xw8gq"]
Dec 08 17:46:23 crc kubenswrapper[5112]: I1208 17:46:23.084992 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Dec 08 17:46:23 crc kubenswrapper[5112]: I1208 17:46:23.115972 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52wgz\" (UniqueName: \"kubernetes.io/projected/2f9194c2-80fe-4130-9b2e-e35ad1725f3f-kube-api-access-52wgz\") pod \"redhat-operators-xw8gq\" (UID: \"2f9194c2-80fe-4130-9b2e-e35ad1725f3f\") " pod="openshift-marketplace/redhat-operators-xw8gq"
Dec 08 17:46:23 crc kubenswrapper[5112]: I1208 17:46:23.116042 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f9194c2-80fe-4130-9b2e-e35ad1725f3f-catalog-content\") pod \"redhat-operators-xw8gq\" (UID: \"2f9194c2-80fe-4130-9b2e-e35ad1725f3f\") " pod="openshift-marketplace/redhat-operators-xw8gq"
Dec 08 17:46:23 crc kubenswrapper[5112]: I1208 17:46:23.116071 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f9194c2-80fe-4130-9b2e-e35ad1725f3f-utilities\") pod \"redhat-operators-xw8gq\" (UID: \"2f9194c2-80fe-4130-9b2e-e35ad1725f3f\") " pod="openshift-marketplace/redhat-operators-xw8gq"
Dec 08 17:46:23 crc kubenswrapper[5112]: I1208 17:46:23.199965 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b6l2f"
Dec 08 17:46:23 crc kubenswrapper[5112]: I1208 17:46:23.216823 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f9194c2-80fe-4130-9b2e-e35ad1725f3f-catalog-content\") pod \"redhat-operators-xw8gq\" (UID: \"2f9194c2-80fe-4130-9b2e-e35ad1725f3f\") " pod="openshift-marketplace/redhat-operators-xw8gq"
Dec 08 17:46:23 crc kubenswrapper[5112]: I1208 17:46:23.216871 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f9194c2-80fe-4130-9b2e-e35ad1725f3f-utilities\") pod \"redhat-operators-xw8gq\" (UID: \"2f9194c2-80fe-4130-9b2e-e35ad1725f3f\") " pod="openshift-marketplace/redhat-operators-xw8gq"
Dec 08 17:46:23 crc kubenswrapper[5112]: I1208 17:46:23.216940 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-52wgz\" (UniqueName: \"kubernetes.io/projected/2f9194c2-80fe-4130-9b2e-e35ad1725f3f-kube-api-access-52wgz\") pod \"redhat-operators-xw8gq\" (UID: \"2f9194c2-80fe-4130-9b2e-e35ad1725f3f\") " pod="openshift-marketplace/redhat-operators-xw8gq"
Dec 08 17:46:23 crc kubenswrapper[5112]: I1208 17:46:23.217302 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f9194c2-80fe-4130-9b2e-e35ad1725f3f-catalog-content\") pod \"redhat-operators-xw8gq\" (UID: \"2f9194c2-80fe-4130-9b2e-e35ad1725f3f\") " pod="openshift-marketplace/redhat-operators-xw8gq"
Dec 08 17:46:23 crc kubenswrapper[5112]: I1208 17:46:23.217478 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f9194c2-80fe-4130-9b2e-e35ad1725f3f-utilities\") pod \"redhat-operators-xw8gq\" (UID: \"2f9194c2-80fe-4130-9b2e-e35ad1725f3f\") " pod="openshift-marketplace/redhat-operators-xw8gq"
Dec 08 17:46:23 crc kubenswrapper[5112]: I1208 17:46:23.236067 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-52wgz\" (UniqueName: \"kubernetes.io/projected/2f9194c2-80fe-4130-9b2e-e35ad1725f3f-kube-api-access-52wgz\") pod \"redhat-operators-xw8gq\" (UID: \"2f9194c2-80fe-4130-9b2e-e35ad1725f3f\") " pod="openshift-marketplace/redhat-operators-xw8gq"
Dec 08 17:46:23 crc kubenswrapper[5112]: I1208 17:46:23.377669 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5bmv" event={"ID":"9670a33c-0814-4c92-9bf2-8eff61da9fb7","Type":"ContainerStarted","Data":"1526ad33138b35aef7f5ddabb3526fb7a2c6e70421a9a3781f09b008073314b1"}
Dec 08 17:46:23 crc kubenswrapper[5112]: I1208 17:46:23.380555 5112 generic.go:358] "Generic (PLEG): container finished" podID="fc5f8270-78f1-4ca1-9be5-8b3c09ebb4ca" containerID="b7fb14c6e8c423a9e6119955e57e830069ab4b8eb3251f2fdc6c4a61658f0d46" exitCode=0
Dec 08 17:46:23 crc kubenswrapper[5112]: I1208 17:46:23.380681 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ln45n" event={"ID":"fc5f8270-78f1-4ca1-9be5-8b3c09ebb4ca","Type":"ContainerDied","Data":"b7fb14c6e8c423a9e6119955e57e830069ab4b8eb3251f2fdc6c4a61658f0d46"}
Dec 08 17:46:23 crc kubenswrapper[5112]: I1208 17:46:23.399791 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k5bmv" podStartSLOduration=2.676111432 podStartE2EDuration="3.399630757s" podCreationTimestamp="2025-12-08 17:46:20 +0000 UTC" firstStartedPulling="2025-12-08 17:46:21.362704463 +0000 UTC m=+358.372253174" lastFinishedPulling="2025-12-08 17:46:22.086223768 +0000 UTC m=+359.095772499" observedRunningTime="2025-12-08 17:46:23.395010453 +0000 UTC m=+360.404559154" watchObservedRunningTime="2025-12-08 17:46:23.399630757 +0000 UTC m=+360.409179458"
Dec 08 17:46:23 crc kubenswrapper[5112]: I1208 17:46:23.407883 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Dec 08 17:46:23 crc kubenswrapper[5112]: I1208 17:46:23.419206 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xw8gq"
Dec 08 17:46:23 crc kubenswrapper[5112]: W1208 17:46:23.602311 5112 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7846011_283b_425d_834f_785b2256c0ed.slice/crio-0ae8f3dced96b8e22ac5331ab837113a19138fe7606d494f34e8b8e0d2e2cc00 WatchSource:0}: Error finding container 0ae8f3dced96b8e22ac5331ab837113a19138fe7606d494f34e8b8e0d2e2cc00: Status 404 returned error can't find the container with id 0ae8f3dced96b8e22ac5331ab837113a19138fe7606d494f34e8b8e0d2e2cc00
Dec 08 17:46:23 crc kubenswrapper[5112]: I1208 17:46:23.605456 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b6l2f"]
Dec 08 17:46:23 crc kubenswrapper[5112]: I1208 17:46:23.803341 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xw8gq"]
Dec 08 17:46:24 crc kubenswrapper[5112]: I1208 17:46:24.391436 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ln45n" event={"ID":"fc5f8270-78f1-4ca1-9be5-8b3c09ebb4ca","Type":"ContainerStarted","Data":"a4e1943bbe1c885a674a38ba57a1f47511abc50916f2db53b7228c66ccb943df"}
Dec 08 17:46:24 crc kubenswrapper[5112]: I1208 17:46:24.395109 5112 generic.go:358] "Generic (PLEG): container finished" podID="f7846011-283b-425d-834f-785b2256c0ed" containerID="5e734954adc95bba62417188c771d8a834caba8d1fc57d4c38bfc115dc0149a1" exitCode=0
Dec 08 17:46:24 crc kubenswrapper[5112]: I1208 17:46:24.395281 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6l2f" event={"ID":"f7846011-283b-425d-834f-785b2256c0ed","Type":"ContainerDied","Data":"5e734954adc95bba62417188c771d8a834caba8d1fc57d4c38bfc115dc0149a1"}
Dec 08 17:46:24 crc kubenswrapper[5112]: I1208 17:46:24.395338 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6l2f" event={"ID":"f7846011-283b-425d-834f-785b2256c0ed","Type":"ContainerStarted","Data":"0ae8f3dced96b8e22ac5331ab837113a19138fe7606d494f34e8b8e0d2e2cc00"}
Dec 08 17:46:24 crc kubenswrapper[5112]: I1208 17:46:24.404585 5112 generic.go:358] "Generic (PLEG): container finished" podID="2f9194c2-80fe-4130-9b2e-e35ad1725f3f" containerID="8c61c449068b55d6a45a4b72ebe583396a484330c9bb8e395f775295f60d0061" exitCode=0
Dec 08 17:46:24 crc kubenswrapper[5112]: I1208 17:46:24.405720 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xw8gq" event={"ID":"2f9194c2-80fe-4130-9b2e-e35ad1725f3f","Type":"ContainerDied","Data":"8c61c449068b55d6a45a4b72ebe583396a484330c9bb8e395f775295f60d0061"}
Dec 08 17:46:24 crc kubenswrapper[5112]: I1208 17:46:24.405765 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xw8gq" event={"ID":"2f9194c2-80fe-4130-9b2e-e35ad1725f3f","Type":"ContainerStarted","Data":"4b66d5beb1345b306cf6aba6d801a6c60d2ade48e41c1a1ff0dff4e9f206f100"}
Dec 08 17:46:24 crc kubenswrapper[5112]: I1208 17:46:24.413009 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ln45n" podStartSLOduration=3.885697884 podStartE2EDuration="4.412992422s" podCreationTimestamp="2025-12-08 17:46:20 +0000 UTC" firstStartedPulling="2025-12-08 17:46:22.370751555 +0000 UTC m=+359.380300256" lastFinishedPulling="2025-12-08 17:46:22.898046093 +0000 UTC m=+359.907594794" observedRunningTime="2025-12-08 17:46:24.408348667 +0000 UTC m=+361.417897398" watchObservedRunningTime="2025-12-08 17:46:24.412992422 +0000 UTC m=+361.422541123"
Dec 08 17:46:25 crc kubenswrapper[5112]: I1208 17:46:25.415280 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xw8gq" event={"ID":"2f9194c2-80fe-4130-9b2e-e35ad1725f3f","Type":"ContainerStarted","Data":"d3ef335c11ef7b66970f3c4d7f924c2df561427bd2384a907aacfec4b266b24e"}
Dec 08 17:46:26 crc kubenswrapper[5112]: I1208 17:46:26.424791 5112 generic.go:358] "Generic (PLEG): container finished" podID="2f9194c2-80fe-4130-9b2e-e35ad1725f3f" containerID="d3ef335c11ef7b66970f3c4d7f924c2df561427bd2384a907aacfec4b266b24e" exitCode=0
Dec 08 17:46:26 crc kubenswrapper[5112]: I1208 17:46:26.424882 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xw8gq" event={"ID":"2f9194c2-80fe-4130-9b2e-e35ad1725f3f","Type":"ContainerDied","Data":"d3ef335c11ef7b66970f3c4d7f924c2df561427bd2384a907aacfec4b266b24e"}
Dec 08 17:46:27 crc kubenswrapper[5112]: I1208 17:46:27.431215 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xw8gq" event={"ID":"2f9194c2-80fe-4130-9b2e-e35ad1725f3f","Type":"ContainerStarted","Data":"f13dc2b08d206695bb0e6a62b76834ba35d80aa858f0691d44f22567c191178d"}
Dec 08 17:46:27 crc kubenswrapper[5112]: I1208 17:46:27.435569 5112 generic.go:358] "Generic (PLEG): container finished" podID="f7846011-283b-425d-834f-785b2256c0ed" containerID="ca554a5a4df8c101fc5ab4e252307c22620881eba7b5ea93db7d43ced6ef7705" exitCode=0
Dec 08 17:46:27 crc kubenswrapper[5112]: I1208 17:46:27.435670 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6l2f" event={"ID":"f7846011-283b-425d-834f-785b2256c0ed","Type":"ContainerDied","Data":"ca554a5a4df8c101fc5ab4e252307c22620881eba7b5ea93db7d43ced6ef7705"}
Dec 08 17:46:27 crc kubenswrapper[5112]: I1208 17:46:27.453801 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xw8gq" podStartSLOduration=3.612404995 podStartE2EDuration="4.453776515s" podCreationTimestamp="2025-12-08 17:46:23 +0000 UTC" firstStartedPulling="2025-12-08 17:46:24.405999254 +0000 UTC m=+361.415547955" lastFinishedPulling="2025-12-08 17:46:25.247370764 +0000 UTC m=+362.256919475" observedRunningTime="2025-12-08 17:46:27.452952133 +0000 UTC m=+364.462500854" watchObservedRunningTime="2025-12-08 17:46:27.453776515 +0000 UTC m=+364.463325216"
Dec 08 17:46:29 crc kubenswrapper[5112]: I1208 17:46:29.466190 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6l2f" event={"ID":"f7846011-283b-425d-834f-785b2256c0ed","Type":"ContainerStarted","Data":"4ccb33903efcf8bfb49b2beca78af4c272446a45e1016ef0ef6aab6721244e77"}
Dec 08 17:46:29 crc kubenswrapper[5112]: I1208 17:46:29.490893 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b6l2f" podStartSLOduration=4.764858143 podStartE2EDuration="7.490871494s" podCreationTimestamp="2025-12-08 17:46:22 +0000 UTC" firstStartedPulling="2025-12-08 17:46:24.395652745 +0000 UTC m=+361.405201446" lastFinishedPulling="2025-12-08 17:46:27.121666106 +0000 UTC m=+364.131214797" observedRunningTime="2025-12-08 17:46:29.48957967 +0000 UTC m=+366.499128391" watchObservedRunningTime="2025-12-08 17:46:29.490871494 +0000 UTC m=+366.500420195"
Dec 08 17:46:30 crc kubenswrapper[5112]: I1208 17:46:30.815779 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-k5bmv"
Dec 08 17:46:30 crc kubenswrapper[5112]: I1208 17:46:30.816107 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k5bmv"
Dec 08 17:46:30 crc kubenswrapper[5112]: I1208 17:46:30.891151 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k5bmv"
Dec 08 17:46:31 crc kubenswrapper[5112]: I1208 17:46:31.005985 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-ln45n"
Dec 08 17:46:31 crc kubenswrapper[5112]: I1208 17:46:31.006111 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ln45n"
Dec 08 17:46:31 crc kubenswrapper[5112]: I1208 17:46:31.074253 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ln45n"
Dec 08 17:46:31 crc kubenswrapper[5112]: I1208 17:46:31.523245 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k5bmv"
Dec 08 17:46:31 crc kubenswrapper[5112]: I1208 17:46:31.536263 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ln45n"
Dec 08 17:46:33 crc kubenswrapper[5112]: I1208 17:46:33.201013 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-b6l2f"
Dec 08 17:46:33 crc kubenswrapper[5112]: I1208 17:46:33.201337 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b6l2f"
Dec 08 17:46:33 crc kubenswrapper[5112]: I1208 17:46:33.246863 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b6l2f"
Dec 08 17:46:33 crc kubenswrapper[5112]: I1208 17:46:33.420280 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-xw8gq"
Dec 08 17:46:33 crc kubenswrapper[5112]: I1208 17:46:33.420336 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xw8gq"
Dec 08 17:46:33 crc kubenswrapper[5112]: I1208 17:46:33.466672 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xw8gq"
Dec 08 17:46:33 crc kubenswrapper[5112]: I1208 17:46:33.525821 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xw8gq"
Dec 08 17:46:33 crc kubenswrapper[5112]: I1208 17:46:33.538831 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b6l2f"
Dec 08 17:47:11 crc kubenswrapper[5112]: I1208 17:47:11.706700 5112 patch_prober.go:28] interesting pod/machine-config-daemon-s6wzf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 17:47:11 crc kubenswrapper[5112]: I1208 17:47:11.707302 5112 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" podUID="95e46da0-94bb-4d22-804b-b3018984cdac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 17:47:41 crc kubenswrapper[5112]: I1208 17:47:41.709688 5112 patch_prober.go:28] interesting pod/machine-config-daemon-s6wzf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 17:47:41 crc kubenswrapper[5112]:
I1208 17:47:41.710452 5112 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" podUID="95e46da0-94bb-4d22-804b-b3018984cdac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 17:48:11 crc kubenswrapper[5112]: I1208 17:48:11.706709 5112 patch_prober.go:28] interesting pod/machine-config-daemon-s6wzf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:48:11 crc kubenswrapper[5112]: I1208 17:48:11.707210 5112 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" podUID="95e46da0-94bb-4d22-804b-b3018984cdac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 17:48:11 crc kubenswrapper[5112]: I1208 17:48:11.707257 5112 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" Dec 08 17:48:11 crc kubenswrapper[5112]: I1208 17:48:11.707837 5112 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2e997f82b6ef61dbbf8fb6c80ff4306b0d7fbb9d6ce22b1cf0188311756dd12e"} pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 17:48:11 crc kubenswrapper[5112]: I1208 17:48:11.707884 5112 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" podUID="95e46da0-94bb-4d22-804b-b3018984cdac" 
containerName="machine-config-daemon" containerID="cri-o://2e997f82b6ef61dbbf8fb6c80ff4306b0d7fbb9d6ce22b1cf0188311756dd12e" gracePeriod=600 Dec 08 17:48:12 crc kubenswrapper[5112]: I1208 17:48:12.106185 5112 generic.go:358] "Generic (PLEG): container finished" podID="95e46da0-94bb-4d22-804b-b3018984cdac" containerID="2e997f82b6ef61dbbf8fb6c80ff4306b0d7fbb9d6ce22b1cf0188311756dd12e" exitCode=0 Dec 08 17:48:12 crc kubenswrapper[5112]: I1208 17:48:12.106381 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" event={"ID":"95e46da0-94bb-4d22-804b-b3018984cdac","Type":"ContainerDied","Data":"2e997f82b6ef61dbbf8fb6c80ff4306b0d7fbb9d6ce22b1cf0188311756dd12e"} Dec 08 17:48:12 crc kubenswrapper[5112]: I1208 17:48:12.106779 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" event={"ID":"95e46da0-94bb-4d22-804b-b3018984cdac","Type":"ContainerStarted","Data":"77f5ad0ee85d883c620f8b160d1de9715081e996ed78a3f7e153e91f47fae509"} Dec 08 17:48:12 crc kubenswrapper[5112]: I1208 17:48:12.106848 5112 scope.go:117] "RemoveContainer" containerID="06e99bae4932494f4de98999926cd28dc808f1a2982c7e8e2372927bc72d1153" Dec 08 17:50:11 crc kubenswrapper[5112]: I1208 17:50:11.707487 5112 patch_prober.go:28] interesting pod/machine-config-daemon-s6wzf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:50:11 crc kubenswrapper[5112]: I1208 17:50:11.708192 5112 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" podUID="95e46da0-94bb-4d22-804b-b3018984cdac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 
08 17:50:23 crc kubenswrapper[5112]: I1208 17:50:23.558497 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 17:50:23 crc kubenswrapper[5112]: I1208 17:50:23.559096 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 17:50:41 crc kubenswrapper[5112]: I1208 17:50:41.706598 5112 patch_prober.go:28] interesting pod/machine-config-daemon-s6wzf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:50:41 crc kubenswrapper[5112]: I1208 17:50:41.707132 5112 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" podUID="95e46da0-94bb-4d22-804b-b3018984cdac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 17:51:11 crc kubenswrapper[5112]: I1208 17:51:11.707041 5112 patch_prober.go:28] interesting pod/machine-config-daemon-s6wzf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:51:11 crc kubenswrapper[5112]: I1208 17:51:11.707959 5112 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" podUID="95e46da0-94bb-4d22-804b-b3018984cdac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 
08 17:51:11 crc kubenswrapper[5112]: I1208 17:51:11.708034 5112 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" Dec 08 17:51:11 crc kubenswrapper[5112]: I1208 17:51:11.709256 5112 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"77f5ad0ee85d883c620f8b160d1de9715081e996ed78a3f7e153e91f47fae509"} pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 17:51:11 crc kubenswrapper[5112]: I1208 17:51:11.709409 5112 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" podUID="95e46da0-94bb-4d22-804b-b3018984cdac" containerName="machine-config-daemon" containerID="cri-o://77f5ad0ee85d883c620f8b160d1de9715081e996ed78a3f7e153e91f47fae509" gracePeriod=600 Dec 08 17:51:12 crc kubenswrapper[5112]: I1208 17:51:12.346833 5112 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 08 17:51:12 crc kubenswrapper[5112]: I1208 17:51:12.372256 5112 generic.go:358] "Generic (PLEG): container finished" podID="95e46da0-94bb-4d22-804b-b3018984cdac" containerID="77f5ad0ee85d883c620f8b160d1de9715081e996ed78a3f7e153e91f47fae509" exitCode=0 Dec 08 17:51:12 crc kubenswrapper[5112]: I1208 17:51:12.372308 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" event={"ID":"95e46da0-94bb-4d22-804b-b3018984cdac","Type":"ContainerDied","Data":"77f5ad0ee85d883c620f8b160d1de9715081e996ed78a3f7e153e91f47fae509"} Dec 08 17:51:12 crc kubenswrapper[5112]: I1208 17:51:12.372376 5112 scope.go:117] "RemoveContainer" containerID="2e997f82b6ef61dbbf8fb6c80ff4306b0d7fbb9d6ce22b1cf0188311756dd12e" Dec 08 17:51:13 crc 
kubenswrapper[5112]: I1208 17:51:13.381066 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" event={"ID":"95e46da0-94bb-4d22-804b-b3018984cdac","Type":"ContainerStarted","Data":"240b1d29409d9f35aedfce10e5ba170d923c2b90de94cecbc02a5feba56821b7"} Dec 08 17:51:25 crc kubenswrapper[5112]: I1208 17:51:25.676610 5112 ???:1] "http: TLS handshake error from 192.168.126.11:40394: no serving certificate available for the kubelet" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.108839 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf"] Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.109812 5112 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" podUID="472d4dbe-4674-43ba-98da-98502eccb960" containerName="kube-rbac-proxy" containerID="cri-o://f144781c243b5270f65ed3ad052edfb4bd18a942565a3ad88814dfcfbff114c6" gracePeriod=30 Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.110307 5112 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" podUID="472d4dbe-4674-43ba-98da-98502eccb960" containerName="ovnkube-cluster-manager" containerID="cri-o://0d4a0df1b413953ea22d933f6d1c17cce51ce61ba86fb54a1b5cef34411d7394" gracePeriod=30 Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.360252 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ng27z"] Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.360960 5112 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" podUID="0510de3f-316a-4902-a746-a746c3ce594c" containerName="ovn-controller" containerID="cri-o://a254fd68045c83555d54d99c38b26a7cceb4e82c07cf657367dc7e54dcfc5b8f" 
gracePeriod=30 Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.360971 5112 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" podUID="0510de3f-316a-4902-a746-a746c3ce594c" containerName="sbdb" containerID="cri-o://410005bc0af8b3ea126b76aeff05cf047c47f2204015d69f9d73d763e3cb4783" gracePeriod=30 Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.361063 5112 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" podUID="0510de3f-316a-4902-a746-a746c3ce594c" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://57f10b0d437771a991bc9480367fc4645b551284ff9e81eafbceeeabb13d6455" gracePeriod=30 Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.361072 5112 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" podUID="0510de3f-316a-4902-a746-a746c3ce594c" containerName="kube-rbac-proxy-node" containerID="cri-o://ba08fbd099185890d342cda33b8aa8e8ff1f39b4db64cee879f476f3a32e2faa" gracePeriod=30 Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.361126 5112 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" podUID="0510de3f-316a-4902-a746-a746c3ce594c" containerName="ovn-acl-logging" containerID="cri-o://6ea4775c45f66d7a80a761d5a9692c4a1d25e6422138f45db953efe79b67535b" gracePeriod=30 Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.361142 5112 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" podUID="0510de3f-316a-4902-a746-a746c3ce594c" containerName="northd" containerID="cri-o://f93b490d86018f64083ec5026a047c7e8a7954229cd86a40937b6fcdd76f6de3" gracePeriod=30 Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.361125 5112 kuberuntime_container.go:858] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" podUID="0510de3f-316a-4902-a746-a746c3ce594c" containerName="nbdb" containerID="cri-o://a8189fc4ecb786878f1710afcf5b0018671faa606a6499c6776ee03ced5fc5a9" gracePeriod=30 Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.367056 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.396608 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-vcgsh"] Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.397153 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="472d4dbe-4674-43ba-98da-98502eccb960" containerName="ovnkube-cluster-manager" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.397170 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="472d4dbe-4674-43ba-98da-98502eccb960" containerName="ovnkube-cluster-manager" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.397198 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="472d4dbe-4674-43ba-98da-98502eccb960" containerName="kube-rbac-proxy" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.397204 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="472d4dbe-4674-43ba-98da-98502eccb960" containerName="kube-rbac-proxy" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.397326 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="472d4dbe-4674-43ba-98da-98502eccb960" containerName="ovnkube-cluster-manager" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.397342 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="472d4dbe-4674-43ba-98da-98502eccb960" containerName="kube-rbac-proxy" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.398384 5112 kuberuntime_container.go:858] "Killing container with a 
grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" podUID="0510de3f-316a-4902-a746-a746c3ce594c" containerName="ovnkube-controller" containerID="cri-o://db5365b52d9af27a994e13a4988cb64763378af99640c1716e4a02b2ca0a5d7a" gracePeriod=30 Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.405674 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-vcgsh" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.478972 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/472d4dbe-4674-43ba-98da-98502eccb960-ovn-control-plane-metrics-cert\") pod \"472d4dbe-4674-43ba-98da-98502eccb960\" (UID: \"472d4dbe-4674-43ba-98da-98502eccb960\") " Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.479126 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/472d4dbe-4674-43ba-98da-98502eccb960-ovnkube-config\") pod \"472d4dbe-4674-43ba-98da-98502eccb960\" (UID: \"472d4dbe-4674-43ba-98da-98502eccb960\") " Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.479188 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sv8p6\" (UniqueName: \"kubernetes.io/projected/472d4dbe-4674-43ba-98da-98502eccb960-kube-api-access-sv8p6\") pod \"472d4dbe-4674-43ba-98da-98502eccb960\" (UID: \"472d4dbe-4674-43ba-98da-98502eccb960\") " Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.479215 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/472d4dbe-4674-43ba-98da-98502eccb960-env-overrides\") pod \"472d4dbe-4674-43ba-98da-98502eccb960\" (UID: \"472d4dbe-4674-43ba-98da-98502eccb960\") " Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.480299 5112 
operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/472d4dbe-4674-43ba-98da-98502eccb960-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "472d4dbe-4674-43ba-98da-98502eccb960" (UID: "472d4dbe-4674-43ba-98da-98502eccb960"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.480400 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/472d4dbe-4674-43ba-98da-98502eccb960-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "472d4dbe-4674-43ba-98da-98502eccb960" (UID: "472d4dbe-4674-43ba-98da-98502eccb960"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.485892 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/472d4dbe-4674-43ba-98da-98502eccb960-kube-api-access-sv8p6" (OuterVolumeSpecName: "kube-api-access-sv8p6") pod "472d4dbe-4674-43ba-98da-98502eccb960" (UID: "472d4dbe-4674-43ba-98da-98502eccb960"). InnerVolumeSpecName "kube-api-access-sv8p6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.485955 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/472d4dbe-4674-43ba-98da-98502eccb960-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "472d4dbe-4674-43ba-98da-98502eccb960" (UID: "472d4dbe-4674-43ba-98da-98502eccb960"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.581051 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/54334f53-7b16-49c6-8c38-96656aa7cad0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-vcgsh\" (UID: \"54334f53-7b16-49c6-8c38-96656aa7cad0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-vcgsh" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.581145 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzz95\" (UniqueName: \"kubernetes.io/projected/54334f53-7b16-49c6-8c38-96656aa7cad0-kube-api-access-xzz95\") pod \"ovnkube-control-plane-97c9b6c48-vcgsh\" (UID: \"54334f53-7b16-49c6-8c38-96656aa7cad0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-vcgsh" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.581210 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/54334f53-7b16-49c6-8c38-96656aa7cad0-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-vcgsh\" (UID: \"54334f53-7b16-49c6-8c38-96656aa7cad0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-vcgsh" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.581343 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/54334f53-7b16-49c6-8c38-96656aa7cad0-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-vcgsh\" (UID: \"54334f53-7b16-49c6-8c38-96656aa7cad0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-vcgsh" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.581476 5112 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-sv8p6\" (UniqueName: \"kubernetes.io/projected/472d4dbe-4674-43ba-98da-98502eccb960-kube-api-access-sv8p6\") on node \"crc\" DevicePath \"\"" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.581486 5112 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/472d4dbe-4674-43ba-98da-98502eccb960-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.581496 5112 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/472d4dbe-4674-43ba-98da-98502eccb960-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.581507 5112 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/472d4dbe-4674-43ba-98da-98502eccb960-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.638520 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ng27z_0510de3f-316a-4902-a746-a746c3ce594c/ovn-acl-logging/0.log" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.638997 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ng27z_0510de3f-316a-4902-a746-a746c3ce594c/ovn-controller/0.log" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.639517 5112 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.682213 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/54334f53-7b16-49c6-8c38-96656aa7cad0-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-vcgsh\" (UID: \"54334f53-7b16-49c6-8c38-96656aa7cad0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-vcgsh" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.682271 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/54334f53-7b16-49c6-8c38-96656aa7cad0-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-vcgsh\" (UID: \"54334f53-7b16-49c6-8c38-96656aa7cad0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-vcgsh" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.682321 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/54334f53-7b16-49c6-8c38-96656aa7cad0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-vcgsh\" (UID: \"54334f53-7b16-49c6-8c38-96656aa7cad0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-vcgsh" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.682337 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xzz95\" (UniqueName: \"kubernetes.io/projected/54334f53-7b16-49c6-8c38-96656aa7cad0-kube-api-access-xzz95\") pod \"ovnkube-control-plane-97c9b6c48-vcgsh\" (UID: \"54334f53-7b16-49c6-8c38-96656aa7cad0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-vcgsh" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.683214 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/54334f53-7b16-49c6-8c38-96656aa7cad0-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-vcgsh\" (UID: \"54334f53-7b16-49c6-8c38-96656aa7cad0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-vcgsh" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.683609 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/54334f53-7b16-49c6-8c38-96656aa7cad0-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-vcgsh\" (UID: \"54334f53-7b16-49c6-8c38-96656aa7cad0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-vcgsh" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.687610 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/54334f53-7b16-49c6-8c38-96656aa7cad0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-vcgsh\" (UID: \"54334f53-7b16-49c6-8c38-96656aa7cad0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-vcgsh" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.693806 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xtt72"] Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.694282 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0510de3f-316a-4902-a746-a746c3ce594c" containerName="ovn-acl-logging" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.694300 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="0510de3f-316a-4902-a746-a746c3ce594c" containerName="ovn-acl-logging" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.694311 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0510de3f-316a-4902-a746-a746c3ce594c" containerName="nbdb" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.694318 5112 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="0510de3f-316a-4902-a746-a746c3ce594c" containerName="nbdb" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.694324 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0510de3f-316a-4902-a746-a746c3ce594c" containerName="ovnkube-controller" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.694330 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="0510de3f-316a-4902-a746-a746c3ce594c" containerName="ovnkube-controller" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.694342 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0510de3f-316a-4902-a746-a746c3ce594c" containerName="kubecfg-setup" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.694348 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="0510de3f-316a-4902-a746-a746c3ce594c" containerName="kubecfg-setup" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.694355 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0510de3f-316a-4902-a746-a746c3ce594c" containerName="ovn-controller" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.694360 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="0510de3f-316a-4902-a746-a746c3ce594c" containerName="ovn-controller" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.694374 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0510de3f-316a-4902-a746-a746c3ce594c" containerName="sbdb" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.694379 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="0510de3f-316a-4902-a746-a746c3ce594c" containerName="sbdb" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.694387 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0510de3f-316a-4902-a746-a746c3ce594c" containerName="northd" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.694393 5112 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="0510de3f-316a-4902-a746-a746c3ce594c" containerName="northd" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.694402 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0510de3f-316a-4902-a746-a746c3ce594c" containerName="kube-rbac-proxy-node" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.694408 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="0510de3f-316a-4902-a746-a746c3ce594c" containerName="kube-rbac-proxy-node" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.694419 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0510de3f-316a-4902-a746-a746c3ce594c" containerName="kube-rbac-proxy-ovn-metrics" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.694424 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="0510de3f-316a-4902-a746-a746c3ce594c" containerName="kube-rbac-proxy-ovn-metrics" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.694521 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="0510de3f-316a-4902-a746-a746c3ce594c" containerName="ovn-controller" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.694531 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="0510de3f-316a-4902-a746-a746c3ce594c" containerName="nbdb" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.694538 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="0510de3f-316a-4902-a746-a746c3ce594c" containerName="kube-rbac-proxy-ovn-metrics" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.694544 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="0510de3f-316a-4902-a746-a746c3ce594c" containerName="kube-rbac-proxy-node" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.694553 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="0510de3f-316a-4902-a746-a746c3ce594c" containerName="ovn-acl-logging" Dec 08 17:52:07 crc 
kubenswrapper[5112]: I1208 17:52:07.694561 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="0510de3f-316a-4902-a746-a746c3ce594c" containerName="ovnkube-controller" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.694569 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="0510de3f-316a-4902-a746-a746c3ce594c" containerName="sbdb" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.694575 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="0510de3f-316a-4902-a746-a746c3ce594c" containerName="northd" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.698767 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.703378 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ng27z_0510de3f-316a-4902-a746-a746c3ce594c/ovn-acl-logging/0.log" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.703993 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ng27z_0510de3f-316a-4902-a746-a746c3ce594c/ovn-controller/0.log" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.704881 5112 generic.go:358] "Generic (PLEG): container finished" podID="0510de3f-316a-4902-a746-a746c3ce594c" containerID="db5365b52d9af27a994e13a4988cb64763378af99640c1716e4a02b2ca0a5d7a" exitCode=0 Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.705017 5112 generic.go:358] "Generic (PLEG): container finished" podID="0510de3f-316a-4902-a746-a746c3ce594c" containerID="410005bc0af8b3ea126b76aeff05cf047c47f2204015d69f9d73d763e3cb4783" exitCode=0 Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.705152 5112 generic.go:358] "Generic (PLEG): container finished" podID="0510de3f-316a-4902-a746-a746c3ce594c" containerID="a8189fc4ecb786878f1710afcf5b0018671faa606a6499c6776ee03ced5fc5a9" exitCode=0 Dec 08 
17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.705248 5112 generic.go:358] "Generic (PLEG): container finished" podID="0510de3f-316a-4902-a746-a746c3ce594c" containerID="f93b490d86018f64083ec5026a047c7e8a7954229cd86a40937b6fcdd76f6de3" exitCode=0 Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.705348 5112 generic.go:358] "Generic (PLEG): container finished" podID="0510de3f-316a-4902-a746-a746c3ce594c" containerID="57f10b0d437771a991bc9480367fc4645b551284ff9e81eafbceeeabb13d6455" exitCode=0 Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.705441 5112 generic.go:358] "Generic (PLEG): container finished" podID="0510de3f-316a-4902-a746-a746c3ce594c" containerID="ba08fbd099185890d342cda33b8aa8e8ff1f39b4db64cee879f476f3a32e2faa" exitCode=0 Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.704968 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" event={"ID":"0510de3f-316a-4902-a746-a746c3ce594c","Type":"ContainerDied","Data":"db5365b52d9af27a994e13a4988cb64763378af99640c1716e4a02b2ca0a5d7a"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.705537 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" event={"ID":"0510de3f-316a-4902-a746-a746c3ce594c","Type":"ContainerDied","Data":"410005bc0af8b3ea126b76aeff05cf047c47f2204015d69f9d73d763e3cb4783"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.705557 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" event={"ID":"0510de3f-316a-4902-a746-a746c3ce594c","Type":"ContainerDied","Data":"a8189fc4ecb786878f1710afcf5b0018671faa606a6499c6776ee03ced5fc5a9"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.705566 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" 
event={"ID":"0510de3f-316a-4902-a746-a746c3ce594c","Type":"ContainerDied","Data":"f93b490d86018f64083ec5026a047c7e8a7954229cd86a40937b6fcdd76f6de3"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.705577 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" event={"ID":"0510de3f-316a-4902-a746-a746c3ce594c","Type":"ContainerDied","Data":"57f10b0d437771a991bc9480367fc4645b551284ff9e81eafbceeeabb13d6455"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.705040 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.705596 5112 scope.go:117] "RemoveContainer" containerID="db5365b52d9af27a994e13a4988cb64763378af99640c1716e4a02b2ca0a5d7a" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.705586 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" event={"ID":"0510de3f-316a-4902-a746-a746c3ce594c","Type":"ContainerDied","Data":"ba08fbd099185890d342cda33b8aa8e8ff1f39b4db64cee879f476f3a32e2faa"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.705700 5112 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6ea4775c45f66d7a80a761d5a9692c4a1d25e6422138f45db953efe79b67535b"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.705719 5112 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a254fd68045c83555d54d99c38b26a7cceb4e82c07cf657367dc7e54dcfc5b8f"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.705726 5112 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ddf7ad90683c0247b1d91e130cf8c3c5cb15d668c9e58d00df0a498a52510072"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.705738 5112 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" event={"ID":"0510de3f-316a-4902-a746-a746c3ce594c","Type":"ContainerDied","Data":"6ea4775c45f66d7a80a761d5a9692c4a1d25e6422138f45db953efe79b67535b"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.705750 5112 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"db5365b52d9af27a994e13a4988cb64763378af99640c1716e4a02b2ca0a5d7a"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.705759 5112 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"410005bc0af8b3ea126b76aeff05cf047c47f2204015d69f9d73d763e3cb4783"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.705765 5112 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a8189fc4ecb786878f1710afcf5b0018671faa606a6499c6776ee03ced5fc5a9"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.705771 5112 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f93b490d86018f64083ec5026a047c7e8a7954229cd86a40937b6fcdd76f6de3"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.705778 5112 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"57f10b0d437771a991bc9480367fc4645b551284ff9e81eafbceeeabb13d6455"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.705784 5112 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ba08fbd099185890d342cda33b8aa8e8ff1f39b4db64cee879f476f3a32e2faa"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.705790 5112 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6ea4775c45f66d7a80a761d5a9692c4a1d25e6422138f45db953efe79b67535b"} Dec 08 17:52:07 crc 
kubenswrapper[5112]: I1208 17:52:07.705795 5112 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a254fd68045c83555d54d99c38b26a7cceb4e82c07cf657367dc7e54dcfc5b8f"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.705799 5112 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ddf7ad90683c0247b1d91e130cf8c3c5cb15d668c9e58d00df0a498a52510072"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.705510 5112 generic.go:358] "Generic (PLEG): container finished" podID="0510de3f-316a-4902-a746-a746c3ce594c" containerID="6ea4775c45f66d7a80a761d5a9692c4a1d25e6422138f45db953efe79b67535b" exitCode=143 Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.706697 5112 generic.go:358] "Generic (PLEG): container finished" podID="0510de3f-316a-4902-a746-a746c3ce594c" containerID="a254fd68045c83555d54d99c38b26a7cceb4e82c07cf657367dc7e54dcfc5b8f" exitCode=143 Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.706778 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" event={"ID":"0510de3f-316a-4902-a746-a746c3ce594c","Type":"ContainerDied","Data":"a254fd68045c83555d54d99c38b26a7cceb4e82c07cf657367dc7e54dcfc5b8f"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.706952 5112 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"db5365b52d9af27a994e13a4988cb64763378af99640c1716e4a02b2ca0a5d7a"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.706962 5112 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"410005bc0af8b3ea126b76aeff05cf047c47f2204015d69f9d73d763e3cb4783"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.706967 5112 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"a8189fc4ecb786878f1710afcf5b0018671faa606a6499c6776ee03ced5fc5a9"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.706972 5112 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f93b490d86018f64083ec5026a047c7e8a7954229cd86a40937b6fcdd76f6de3"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.706977 5112 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"57f10b0d437771a991bc9480367fc4645b551284ff9e81eafbceeeabb13d6455"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.706982 5112 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ba08fbd099185890d342cda33b8aa8e8ff1f39b4db64cee879f476f3a32e2faa"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.706988 5112 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6ea4775c45f66d7a80a761d5a9692c4a1d25e6422138f45db953efe79b67535b"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.706996 5112 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a254fd68045c83555d54d99c38b26a7cceb4e82c07cf657367dc7e54dcfc5b8f"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.707007 5112 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ddf7ad90683c0247b1d91e130cf8c3c5cb15d668c9e58d00df0a498a52510072"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.707020 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ng27z" event={"ID":"0510de3f-316a-4902-a746-a746c3ce594c","Type":"ContainerDied","Data":"23a017c1e028b6e6e5891a0947073823a15913426838b1754ef91de5e8f88124"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.707032 5112 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"db5365b52d9af27a994e13a4988cb64763378af99640c1716e4a02b2ca0a5d7a"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.707039 5112 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"410005bc0af8b3ea126b76aeff05cf047c47f2204015d69f9d73d763e3cb4783"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.707046 5112 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a8189fc4ecb786878f1710afcf5b0018671faa606a6499c6776ee03ced5fc5a9"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.707052 5112 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f93b490d86018f64083ec5026a047c7e8a7954229cd86a40937b6fcdd76f6de3"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.707058 5112 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"57f10b0d437771a991bc9480367fc4645b551284ff9e81eafbceeeabb13d6455"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.707063 5112 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ba08fbd099185890d342cda33b8aa8e8ff1f39b4db64cee879f476f3a32e2faa"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.707069 5112 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6ea4775c45f66d7a80a761d5a9692c4a1d25e6422138f45db953efe79b67535b"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.707092 5112 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a254fd68045c83555d54d99c38b26a7cceb4e82c07cf657367dc7e54dcfc5b8f"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.707098 5112 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ddf7ad90683c0247b1d91e130cf8c3c5cb15d668c9e58d00df0a498a52510072"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.710265 5112 generic.go:358] "Generic (PLEG): container finished" podID="472d4dbe-4674-43ba-98da-98502eccb960" containerID="0d4a0df1b413953ea22d933f6d1c17cce51ce61ba86fb54a1b5cef34411d7394" exitCode=0 Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.710278 5112 generic.go:358] "Generic (PLEG): container finished" podID="472d4dbe-4674-43ba-98da-98502eccb960" containerID="f144781c243b5270f65ed3ad052edfb4bd18a942565a3ad88814dfcfbff114c6" exitCode=0 Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.710333 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" event={"ID":"472d4dbe-4674-43ba-98da-98502eccb960","Type":"ContainerDied","Data":"0d4a0df1b413953ea22d933f6d1c17cce51ce61ba86fb54a1b5cef34411d7394"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.710345 5112 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0d4a0df1b413953ea22d933f6d1c17cce51ce61ba86fb54a1b5cef34411d7394"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.710351 5112 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f144781c243b5270f65ed3ad052edfb4bd18a942565a3ad88814dfcfbff114c6"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.710359 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" event={"ID":"472d4dbe-4674-43ba-98da-98502eccb960","Type":"ContainerDied","Data":"f144781c243b5270f65ed3ad052edfb4bd18a942565a3ad88814dfcfbff114c6"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.710365 5112 pod_container_deletor.go:114] "Failed to issue the request to 
remove container" containerID={"Type":"cri-o","ID":"0d4a0df1b413953ea22d933f6d1c17cce51ce61ba86fb54a1b5cef34411d7394"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.710371 5112 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f144781c243b5270f65ed3ad052edfb4bd18a942565a3ad88814dfcfbff114c6"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.710378 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" event={"ID":"472d4dbe-4674-43ba-98da-98502eccb960","Type":"ContainerDied","Data":"4fe90487572f15dee0fd51ad86b86ff796accb27f36bbd9d0738df2a8cd05aed"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.710388 5112 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0d4a0df1b413953ea22d933f6d1c17cce51ce61ba86fb54a1b5cef34411d7394"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.710393 5112 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f144781c243b5270f65ed3ad052edfb4bd18a942565a3ad88814dfcfbff114c6"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.710587 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzz95\" (UniqueName: \"kubernetes.io/projected/54334f53-7b16-49c6-8c38-96656aa7cad0-kube-api-access-xzz95\") pod \"ovnkube-control-plane-97c9b6c48-vcgsh\" (UID: \"54334f53-7b16-49c6-8c38-96656aa7cad0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-vcgsh" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.711029 5112 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.711910 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kvv4v_288ee203-be3f-4176-90b2-7d95ee47aee8/kube-multus/0.log" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.711942 5112 generic.go:358] "Generic (PLEG): container finished" podID="288ee203-be3f-4176-90b2-7d95ee47aee8" containerID="aeb0708a96645938003ab2d6f651e2c6c0996b2252673869e193349197d88b1f" exitCode=2 Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.712013 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kvv4v" event={"ID":"288ee203-be3f-4176-90b2-7d95ee47aee8","Type":"ContainerDied","Data":"aeb0708a96645938003ab2d6f651e2c6c0996b2252673869e193349197d88b1f"} Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.712487 5112 scope.go:117] "RemoveContainer" containerID="aeb0708a96645938003ab2d6f651e2c6c0996b2252673869e193349197d88b1f" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.727958 5112 scope.go:117] "RemoveContainer" containerID="410005bc0af8b3ea126b76aeff05cf047c47f2204015d69f9d73d763e3cb4783" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.743486 5112 scope.go:117] "RemoveContainer" containerID="a8189fc4ecb786878f1710afcf5b0018671faa606a6499c6776ee03ced5fc5a9" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.757589 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf"] Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.758684 5112 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-b7fmf"] Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.761254 5112 scope.go:117] "RemoveContainer" containerID="f93b490d86018f64083ec5026a047c7e8a7954229cd86a40937b6fcdd76f6de3" Dec 08 17:52:07 crc 
kubenswrapper[5112]: I1208 17:52:07.773857 5112 scope.go:117] "RemoveContainer" containerID="57f10b0d437771a991bc9480367fc4645b551284ff9e81eafbceeeabb13d6455" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.783194 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-host-run-ovn-kubernetes\") pod \"0510de3f-316a-4902-a746-a746c3ce594c\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.783250 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-etc-openvswitch\") pod \"0510de3f-316a-4902-a746-a746c3ce594c\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.783273 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-host-slash\") pod \"0510de3f-316a-4902-a746-a746c3ce594c\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.783307 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vcrm\" (UniqueName: \"kubernetes.io/projected/0510de3f-316a-4902-a746-a746c3ce594c-kube-api-access-7vcrm\") pod \"0510de3f-316a-4902-a746-a746c3ce594c\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.783305 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "0510de3f-316a-4902-a746-a746c3ce594c" (UID: "0510de3f-316a-4902-a746-a746c3ce594c"). 
InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.783311 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "0510de3f-316a-4902-a746-a746c3ce594c" (UID: "0510de3f-316a-4902-a746-a746c3ce594c"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.783338 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-host-slash" (OuterVolumeSpecName: "host-slash") pod "0510de3f-316a-4902-a746-a746c3ce594c" (UID: "0510de3f-316a-4902-a746-a746c3ce594c"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.783354 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0510de3f-316a-4902-a746-a746c3ce594c-env-overrides\") pod \"0510de3f-316a-4902-a746-a746c3ce594c\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.783436 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-node-log\") pod \"0510de3f-316a-4902-a746-a746c3ce594c\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.783458 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-host-run-netns\") pod \"0510de3f-316a-4902-a746-a746c3ce594c\" (UID: 
\"0510de3f-316a-4902-a746-a746c3ce594c\") " Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.783490 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"0510de3f-316a-4902-a746-a746c3ce594c\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.783519 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0510de3f-316a-4902-a746-a746c3ce594c-ovnkube-config\") pod \"0510de3f-316a-4902-a746-a746c3ce594c\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.783538 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0510de3f-316a-4902-a746-a746c3ce594c-ovnkube-script-lib\") pod \"0510de3f-316a-4902-a746-a746c3ce594c\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.783556 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0510de3f-316a-4902-a746-a746c3ce594c-ovn-node-metrics-cert\") pod \"0510de3f-316a-4902-a746-a746c3ce594c\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.783578 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-host-kubelet\") pod \"0510de3f-316a-4902-a746-a746c3ce594c\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.783600 5112 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-run-ovn\") pod \"0510de3f-316a-4902-a746-a746c3ce594c\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.783615 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-host-cni-bin\") pod \"0510de3f-316a-4902-a746-a746c3ce594c\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.783633 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-systemd-units\") pod \"0510de3f-316a-4902-a746-a746c3ce594c\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.783674 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-host-cni-netd\") pod \"0510de3f-316a-4902-a746-a746c3ce594c\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.783700 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-run-openvswitch\") pod \"0510de3f-316a-4902-a746-a746c3ce594c\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.783719 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-log-socket\") pod \"0510de3f-316a-4902-a746-a746c3ce594c\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") 
" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.783734 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-var-lib-openvswitch\") pod \"0510de3f-316a-4902-a746-a746c3ce594c\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.783767 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-run-systemd\") pod \"0510de3f-316a-4902-a746-a746c3ce594c\" (UID: \"0510de3f-316a-4902-a746-a746c3ce594c\") " Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.783991 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "0510de3f-316a-4902-a746-a746c3ce594c" (UID: "0510de3f-316a-4902-a746-a746c3ce594c"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.784034 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "0510de3f-316a-4902-a746-a746c3ce594c" (UID: "0510de3f-316a-4902-a746-a746c3ce594c"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.784065 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "0510de3f-316a-4902-a746-a746c3ce594c" (UID: "0510de3f-316a-4902-a746-a746c3ce594c"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.784097 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "0510de3f-316a-4902-a746-a746c3ce594c" (UID: "0510de3f-316a-4902-a746-a746c3ce594c"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.784118 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "0510de3f-316a-4902-a746-a746c3ce594c" (UID: "0510de3f-316a-4902-a746-a746c3ce594c"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.784127 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "0510de3f-316a-4902-a746-a746c3ce594c" (UID: "0510de3f-316a-4902-a746-a746c3ce594c"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.784146 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-log-socket" (OuterVolumeSpecName: "log-socket") pod "0510de3f-316a-4902-a746-a746c3ce594c" (UID: "0510de3f-316a-4902-a746-a746c3ce594c"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.784144 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0510de3f-316a-4902-a746-a746c3ce594c-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "0510de3f-316a-4902-a746-a746c3ce594c" (UID: "0510de3f-316a-4902-a746-a746c3ce594c"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.784144 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-node-log" (OuterVolumeSpecName: "node-log") pod "0510de3f-316a-4902-a746-a746c3ce594c" (UID: "0510de3f-316a-4902-a746-a746c3ce594c"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.784152 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "0510de3f-316a-4902-a746-a746c3ce594c" (UID: "0510de3f-316a-4902-a746-a746c3ce594c"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.784174 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "0510de3f-316a-4902-a746-a746c3ce594c" (UID: "0510de3f-316a-4902-a746-a746c3ce594c"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.784177 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "0510de3f-316a-4902-a746-a746c3ce594c" (UID: "0510de3f-316a-4902-a746-a746c3ce594c"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.784344 5112 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.784362 5112 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.784374 5112 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-host-slash\") on node \"crc\" DevicePath \"\"" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.784385 5112 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0510de3f-316a-4902-a746-a746c3ce594c-env-overrides\") on node \"crc\" 
DevicePath \"\"" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.784396 5112 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-node-log\") on node \"crc\" DevicePath \"\"" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.784406 5112 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.784420 5112 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.784432 5112 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.784442 5112 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.784449 5112 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.784458 5112 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.784458 5112 
operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0510de3f-316a-4902-a746-a746c3ce594c-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "0510de3f-316a-4902-a746-a746c3ce594c" (UID: "0510de3f-316a-4902-a746-a746c3ce594c"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.784465 5112 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.784516 5112 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.784529 5112 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-log-socket\") on node \"crc\" DevicePath \"\"" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.784543 5112 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.784493 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0510de3f-316a-4902-a746-a746c3ce594c-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "0510de3f-316a-4902-a746-a746c3ce594c" (UID: "0510de3f-316a-4902-a746-a746c3ce594c"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.787561 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0510de3f-316a-4902-a746-a746c3ce594c-kube-api-access-7vcrm" (OuterVolumeSpecName: "kube-api-access-7vcrm") pod "0510de3f-316a-4902-a746-a746c3ce594c" (UID: "0510de3f-316a-4902-a746-a746c3ce594c"). InnerVolumeSpecName "kube-api-access-7vcrm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.788395 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0510de3f-316a-4902-a746-a746c3ce594c-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "0510de3f-316a-4902-a746-a746c3ce594c" (UID: "0510de3f-316a-4902-a746-a746c3ce594c"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.794054 5112 scope.go:117] "RemoveContainer" containerID="ba08fbd099185890d342cda33b8aa8e8ff1f39b4db64cee879f476f3a32e2faa" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.799481 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "0510de3f-316a-4902-a746-a746c3ce594c" (UID: "0510de3f-316a-4902-a746-a746c3ce594c"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.813456 5112 scope.go:117] "RemoveContainer" containerID="6ea4775c45f66d7a80a761d5a9692c4a1d25e6422138f45db953efe79b67535b" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.815946 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-vcgsh" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.831209 5112 scope.go:117] "RemoveContainer" containerID="a254fd68045c83555d54d99c38b26a7cceb4e82c07cf657367dc7e54dcfc5b8f" Dec 08 17:52:07 crc kubenswrapper[5112]: W1208 17:52:07.836839 5112 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54334f53_7b16_49c6_8c38_96656aa7cad0.slice/crio-25d3a8c4bb12d6321b89c80fa41a41ac053986b2faf720fc4a0796e192922cca WatchSource:0}: Error finding container 25d3a8c4bb12d6321b89c80fa41a41ac053986b2faf720fc4a0796e192922cca: Status 404 returned error can't find the container with id 25d3a8c4bb12d6321b89c80fa41a41ac053986b2faf720fc4a0796e192922cca Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.846350 5112 scope.go:117] "RemoveContainer" containerID="ddf7ad90683c0247b1d91e130cf8c3c5cb15d668c9e58d00df0a498a52510072" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.859449 5112 scope.go:117] "RemoveContainer" containerID="db5365b52d9af27a994e13a4988cb64763378af99640c1716e4a02b2ca0a5d7a" Dec 08 17:52:07 crc kubenswrapper[5112]: E1208 17:52:07.860022 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db5365b52d9af27a994e13a4988cb64763378af99640c1716e4a02b2ca0a5d7a\": container with ID starting with db5365b52d9af27a994e13a4988cb64763378af99640c1716e4a02b2ca0a5d7a not found: ID does not exist" containerID="db5365b52d9af27a994e13a4988cb64763378af99640c1716e4a02b2ca0a5d7a" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.860099 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db5365b52d9af27a994e13a4988cb64763378af99640c1716e4a02b2ca0a5d7a"} err="failed to get container status \"db5365b52d9af27a994e13a4988cb64763378af99640c1716e4a02b2ca0a5d7a\": rpc error: code = NotFound 
desc = could not find container \"db5365b52d9af27a994e13a4988cb64763378af99640c1716e4a02b2ca0a5d7a\": container with ID starting with db5365b52d9af27a994e13a4988cb64763378af99640c1716e4a02b2ca0a5d7a not found: ID does not exist" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.860131 5112 scope.go:117] "RemoveContainer" containerID="410005bc0af8b3ea126b76aeff05cf047c47f2204015d69f9d73d763e3cb4783" Dec 08 17:52:07 crc kubenswrapper[5112]: E1208 17:52:07.860383 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"410005bc0af8b3ea126b76aeff05cf047c47f2204015d69f9d73d763e3cb4783\": container with ID starting with 410005bc0af8b3ea126b76aeff05cf047c47f2204015d69f9d73d763e3cb4783 not found: ID does not exist" containerID="410005bc0af8b3ea126b76aeff05cf047c47f2204015d69f9d73d763e3cb4783" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.860413 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"410005bc0af8b3ea126b76aeff05cf047c47f2204015d69f9d73d763e3cb4783"} err="failed to get container status \"410005bc0af8b3ea126b76aeff05cf047c47f2204015d69f9d73d763e3cb4783\": rpc error: code = NotFound desc = could not find container \"410005bc0af8b3ea126b76aeff05cf047c47f2204015d69f9d73d763e3cb4783\": container with ID starting with 410005bc0af8b3ea126b76aeff05cf047c47f2204015d69f9d73d763e3cb4783 not found: ID does not exist" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.860430 5112 scope.go:117] "RemoveContainer" containerID="a8189fc4ecb786878f1710afcf5b0018671faa606a6499c6776ee03ced5fc5a9" Dec 08 17:52:07 crc kubenswrapper[5112]: E1208 17:52:07.861778 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8189fc4ecb786878f1710afcf5b0018671faa606a6499c6776ee03ced5fc5a9\": container with ID starting with 
a8189fc4ecb786878f1710afcf5b0018671faa606a6499c6776ee03ced5fc5a9 not found: ID does not exist" containerID="a8189fc4ecb786878f1710afcf5b0018671faa606a6499c6776ee03ced5fc5a9" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.861813 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8189fc4ecb786878f1710afcf5b0018671faa606a6499c6776ee03ced5fc5a9"} err="failed to get container status \"a8189fc4ecb786878f1710afcf5b0018671faa606a6499c6776ee03ced5fc5a9\": rpc error: code = NotFound desc = could not find container \"a8189fc4ecb786878f1710afcf5b0018671faa606a6499c6776ee03ced5fc5a9\": container with ID starting with a8189fc4ecb786878f1710afcf5b0018671faa606a6499c6776ee03ced5fc5a9 not found: ID does not exist" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.861855 5112 scope.go:117] "RemoveContainer" containerID="f93b490d86018f64083ec5026a047c7e8a7954229cd86a40937b6fcdd76f6de3" Dec 08 17:52:07 crc kubenswrapper[5112]: E1208 17:52:07.862392 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f93b490d86018f64083ec5026a047c7e8a7954229cd86a40937b6fcdd76f6de3\": container with ID starting with f93b490d86018f64083ec5026a047c7e8a7954229cd86a40937b6fcdd76f6de3 not found: ID does not exist" containerID="f93b490d86018f64083ec5026a047c7e8a7954229cd86a40937b6fcdd76f6de3" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.862437 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f93b490d86018f64083ec5026a047c7e8a7954229cd86a40937b6fcdd76f6de3"} err="failed to get container status \"f93b490d86018f64083ec5026a047c7e8a7954229cd86a40937b6fcdd76f6de3\": rpc error: code = NotFound desc = could not find container \"f93b490d86018f64083ec5026a047c7e8a7954229cd86a40937b6fcdd76f6de3\": container with ID starting with f93b490d86018f64083ec5026a047c7e8a7954229cd86a40937b6fcdd76f6de3 not found: ID does not 
exist" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.862463 5112 scope.go:117] "RemoveContainer" containerID="57f10b0d437771a991bc9480367fc4645b551284ff9e81eafbceeeabb13d6455" Dec 08 17:52:07 crc kubenswrapper[5112]: E1208 17:52:07.862882 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57f10b0d437771a991bc9480367fc4645b551284ff9e81eafbceeeabb13d6455\": container with ID starting with 57f10b0d437771a991bc9480367fc4645b551284ff9e81eafbceeeabb13d6455 not found: ID does not exist" containerID="57f10b0d437771a991bc9480367fc4645b551284ff9e81eafbceeeabb13d6455" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.862913 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57f10b0d437771a991bc9480367fc4645b551284ff9e81eafbceeeabb13d6455"} err="failed to get container status \"57f10b0d437771a991bc9480367fc4645b551284ff9e81eafbceeeabb13d6455\": rpc error: code = NotFound desc = could not find container \"57f10b0d437771a991bc9480367fc4645b551284ff9e81eafbceeeabb13d6455\": container with ID starting with 57f10b0d437771a991bc9480367fc4645b551284ff9e81eafbceeeabb13d6455 not found: ID does not exist" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.862931 5112 scope.go:117] "RemoveContainer" containerID="ba08fbd099185890d342cda33b8aa8e8ff1f39b4db64cee879f476f3a32e2faa" Dec 08 17:52:07 crc kubenswrapper[5112]: E1208 17:52:07.863409 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba08fbd099185890d342cda33b8aa8e8ff1f39b4db64cee879f476f3a32e2faa\": container with ID starting with ba08fbd099185890d342cda33b8aa8e8ff1f39b4db64cee879f476f3a32e2faa not found: ID does not exist" containerID="ba08fbd099185890d342cda33b8aa8e8ff1f39b4db64cee879f476f3a32e2faa" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.863437 5112 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba08fbd099185890d342cda33b8aa8e8ff1f39b4db64cee879f476f3a32e2faa"} err="failed to get container status \"ba08fbd099185890d342cda33b8aa8e8ff1f39b4db64cee879f476f3a32e2faa\": rpc error: code = NotFound desc = could not find container \"ba08fbd099185890d342cda33b8aa8e8ff1f39b4db64cee879f476f3a32e2faa\": container with ID starting with ba08fbd099185890d342cda33b8aa8e8ff1f39b4db64cee879f476f3a32e2faa not found: ID does not exist" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.863457 5112 scope.go:117] "RemoveContainer" containerID="6ea4775c45f66d7a80a761d5a9692c4a1d25e6422138f45db953efe79b67535b" Dec 08 17:52:07 crc kubenswrapper[5112]: E1208 17:52:07.863736 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ea4775c45f66d7a80a761d5a9692c4a1d25e6422138f45db953efe79b67535b\": container with ID starting with 6ea4775c45f66d7a80a761d5a9692c4a1d25e6422138f45db953efe79b67535b not found: ID does not exist" containerID="6ea4775c45f66d7a80a761d5a9692c4a1d25e6422138f45db953efe79b67535b" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.863760 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ea4775c45f66d7a80a761d5a9692c4a1d25e6422138f45db953efe79b67535b"} err="failed to get container status \"6ea4775c45f66d7a80a761d5a9692c4a1d25e6422138f45db953efe79b67535b\": rpc error: code = NotFound desc = could not find container \"6ea4775c45f66d7a80a761d5a9692c4a1d25e6422138f45db953efe79b67535b\": container with ID starting with 6ea4775c45f66d7a80a761d5a9692c4a1d25e6422138f45db953efe79b67535b not found: ID does not exist" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.863818 5112 scope.go:117] "RemoveContainer" containerID="a254fd68045c83555d54d99c38b26a7cceb4e82c07cf657367dc7e54dcfc5b8f" Dec 08 17:52:07 crc kubenswrapper[5112]: E1208 17:52:07.864048 5112 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a254fd68045c83555d54d99c38b26a7cceb4e82c07cf657367dc7e54dcfc5b8f\": container with ID starting with a254fd68045c83555d54d99c38b26a7cceb4e82c07cf657367dc7e54dcfc5b8f not found: ID does not exist" containerID="a254fd68045c83555d54d99c38b26a7cceb4e82c07cf657367dc7e54dcfc5b8f" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.864071 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a254fd68045c83555d54d99c38b26a7cceb4e82c07cf657367dc7e54dcfc5b8f"} err="failed to get container status \"a254fd68045c83555d54d99c38b26a7cceb4e82c07cf657367dc7e54dcfc5b8f\": rpc error: code = NotFound desc = could not find container \"a254fd68045c83555d54d99c38b26a7cceb4e82c07cf657367dc7e54dcfc5b8f\": container with ID starting with a254fd68045c83555d54d99c38b26a7cceb4e82c07cf657367dc7e54dcfc5b8f not found: ID does not exist" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.864146 5112 scope.go:117] "RemoveContainer" containerID="ddf7ad90683c0247b1d91e130cf8c3c5cb15d668c9e58d00df0a498a52510072" Dec 08 17:52:07 crc kubenswrapper[5112]: E1208 17:52:07.864652 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddf7ad90683c0247b1d91e130cf8c3c5cb15d668c9e58d00df0a498a52510072\": container with ID starting with ddf7ad90683c0247b1d91e130cf8c3c5cb15d668c9e58d00df0a498a52510072 not found: ID does not exist" containerID="ddf7ad90683c0247b1d91e130cf8c3c5cb15d668c9e58d00df0a498a52510072" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.864683 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddf7ad90683c0247b1d91e130cf8c3c5cb15d668c9e58d00df0a498a52510072"} err="failed to get container status \"ddf7ad90683c0247b1d91e130cf8c3c5cb15d668c9e58d00df0a498a52510072\": rpc error: code = NotFound desc = could 
not find container \"ddf7ad90683c0247b1d91e130cf8c3c5cb15d668c9e58d00df0a498a52510072\": container with ID starting with ddf7ad90683c0247b1d91e130cf8c3c5cb15d668c9e58d00df0a498a52510072 not found: ID does not exist" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.864701 5112 scope.go:117] "RemoveContainer" containerID="db5365b52d9af27a994e13a4988cb64763378af99640c1716e4a02b2ca0a5d7a" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.865048 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db5365b52d9af27a994e13a4988cb64763378af99640c1716e4a02b2ca0a5d7a"} err="failed to get container status \"db5365b52d9af27a994e13a4988cb64763378af99640c1716e4a02b2ca0a5d7a\": rpc error: code = NotFound desc = could not find container \"db5365b52d9af27a994e13a4988cb64763378af99640c1716e4a02b2ca0a5d7a\": container with ID starting with db5365b52d9af27a994e13a4988cb64763378af99640c1716e4a02b2ca0a5d7a not found: ID does not exist" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.865088 5112 scope.go:117] "RemoveContainer" containerID="410005bc0af8b3ea126b76aeff05cf047c47f2204015d69f9d73d763e3cb4783" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.865349 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"410005bc0af8b3ea126b76aeff05cf047c47f2204015d69f9d73d763e3cb4783"} err="failed to get container status \"410005bc0af8b3ea126b76aeff05cf047c47f2204015d69f9d73d763e3cb4783\": rpc error: code = NotFound desc = could not find container \"410005bc0af8b3ea126b76aeff05cf047c47f2204015d69f9d73d763e3cb4783\": container with ID starting with 410005bc0af8b3ea126b76aeff05cf047c47f2204015d69f9d73d763e3cb4783 not found: ID does not exist" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.865371 5112 scope.go:117] "RemoveContainer" containerID="a8189fc4ecb786878f1710afcf5b0018671faa606a6499c6776ee03ced5fc5a9" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 
17:52:07.865621 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8189fc4ecb786878f1710afcf5b0018671faa606a6499c6776ee03ced5fc5a9"} err="failed to get container status \"a8189fc4ecb786878f1710afcf5b0018671faa606a6499c6776ee03ced5fc5a9\": rpc error: code = NotFound desc = could not find container \"a8189fc4ecb786878f1710afcf5b0018671faa606a6499c6776ee03ced5fc5a9\": container with ID starting with a8189fc4ecb786878f1710afcf5b0018671faa606a6499c6776ee03ced5fc5a9 not found: ID does not exist" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.865645 5112 scope.go:117] "RemoveContainer" containerID="f93b490d86018f64083ec5026a047c7e8a7954229cd86a40937b6fcdd76f6de3" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.865861 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f93b490d86018f64083ec5026a047c7e8a7954229cd86a40937b6fcdd76f6de3"} err="failed to get container status \"f93b490d86018f64083ec5026a047c7e8a7954229cd86a40937b6fcdd76f6de3\": rpc error: code = NotFound desc = could not find container \"f93b490d86018f64083ec5026a047c7e8a7954229cd86a40937b6fcdd76f6de3\": container with ID starting with f93b490d86018f64083ec5026a047c7e8a7954229cd86a40937b6fcdd76f6de3 not found: ID does not exist" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.865879 5112 scope.go:117] "RemoveContainer" containerID="57f10b0d437771a991bc9480367fc4645b551284ff9e81eafbceeeabb13d6455" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.866067 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57f10b0d437771a991bc9480367fc4645b551284ff9e81eafbceeeabb13d6455"} err="failed to get container status \"57f10b0d437771a991bc9480367fc4645b551284ff9e81eafbceeeabb13d6455\": rpc error: code = NotFound desc = could not find container \"57f10b0d437771a991bc9480367fc4645b551284ff9e81eafbceeeabb13d6455\": container with ID starting with 
57f10b0d437771a991bc9480367fc4645b551284ff9e81eafbceeeabb13d6455 not found: ID does not exist" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.866117 5112 scope.go:117] "RemoveContainer" containerID="ba08fbd099185890d342cda33b8aa8e8ff1f39b4db64cee879f476f3a32e2faa" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.866307 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba08fbd099185890d342cda33b8aa8e8ff1f39b4db64cee879f476f3a32e2faa"} err="failed to get container status \"ba08fbd099185890d342cda33b8aa8e8ff1f39b4db64cee879f476f3a32e2faa\": rpc error: code = NotFound desc = could not find container \"ba08fbd099185890d342cda33b8aa8e8ff1f39b4db64cee879f476f3a32e2faa\": container with ID starting with ba08fbd099185890d342cda33b8aa8e8ff1f39b4db64cee879f476f3a32e2faa not found: ID does not exist" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.866325 5112 scope.go:117] "RemoveContainer" containerID="6ea4775c45f66d7a80a761d5a9692c4a1d25e6422138f45db953efe79b67535b" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.866554 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ea4775c45f66d7a80a761d5a9692c4a1d25e6422138f45db953efe79b67535b"} err="failed to get container status \"6ea4775c45f66d7a80a761d5a9692c4a1d25e6422138f45db953efe79b67535b\": rpc error: code = NotFound desc = could not find container \"6ea4775c45f66d7a80a761d5a9692c4a1d25e6422138f45db953efe79b67535b\": container with ID starting with 6ea4775c45f66d7a80a761d5a9692c4a1d25e6422138f45db953efe79b67535b not found: ID does not exist" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.866573 5112 scope.go:117] "RemoveContainer" containerID="a254fd68045c83555d54d99c38b26a7cceb4e82c07cf657367dc7e54dcfc5b8f" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.866819 5112 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a254fd68045c83555d54d99c38b26a7cceb4e82c07cf657367dc7e54dcfc5b8f"} err="failed to get container status \"a254fd68045c83555d54d99c38b26a7cceb4e82c07cf657367dc7e54dcfc5b8f\": rpc error: code = NotFound desc = could not find container \"a254fd68045c83555d54d99c38b26a7cceb4e82c07cf657367dc7e54dcfc5b8f\": container with ID starting with a254fd68045c83555d54d99c38b26a7cceb4e82c07cf657367dc7e54dcfc5b8f not found: ID does not exist" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.866851 5112 scope.go:117] "RemoveContainer" containerID="ddf7ad90683c0247b1d91e130cf8c3c5cb15d668c9e58d00df0a498a52510072" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.867060 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddf7ad90683c0247b1d91e130cf8c3c5cb15d668c9e58d00df0a498a52510072"} err="failed to get container status \"ddf7ad90683c0247b1d91e130cf8c3c5cb15d668c9e58d00df0a498a52510072\": rpc error: code = NotFound desc = could not find container \"ddf7ad90683c0247b1d91e130cf8c3c5cb15d668c9e58d00df0a498a52510072\": container with ID starting with ddf7ad90683c0247b1d91e130cf8c3c5cb15d668c9e58d00df0a498a52510072 not found: ID does not exist" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.867102 5112 scope.go:117] "RemoveContainer" containerID="db5365b52d9af27a994e13a4988cb64763378af99640c1716e4a02b2ca0a5d7a" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.867316 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db5365b52d9af27a994e13a4988cb64763378af99640c1716e4a02b2ca0a5d7a"} err="failed to get container status \"db5365b52d9af27a994e13a4988cb64763378af99640c1716e4a02b2ca0a5d7a\": rpc error: code = NotFound desc = could not find container \"db5365b52d9af27a994e13a4988cb64763378af99640c1716e4a02b2ca0a5d7a\": container with ID starting with db5365b52d9af27a994e13a4988cb64763378af99640c1716e4a02b2ca0a5d7a not found: ID does not 
exist" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.867335 5112 scope.go:117] "RemoveContainer" containerID="410005bc0af8b3ea126b76aeff05cf047c47f2204015d69f9d73d763e3cb4783" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.867871 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"410005bc0af8b3ea126b76aeff05cf047c47f2204015d69f9d73d763e3cb4783"} err="failed to get container status \"410005bc0af8b3ea126b76aeff05cf047c47f2204015d69f9d73d763e3cb4783\": rpc error: code = NotFound desc = could not find container \"410005bc0af8b3ea126b76aeff05cf047c47f2204015d69f9d73d763e3cb4783\": container with ID starting with 410005bc0af8b3ea126b76aeff05cf047c47f2204015d69f9d73d763e3cb4783 not found: ID does not exist" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.867894 5112 scope.go:117] "RemoveContainer" containerID="a8189fc4ecb786878f1710afcf5b0018671faa606a6499c6776ee03ced5fc5a9" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.868774 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8189fc4ecb786878f1710afcf5b0018671faa606a6499c6776ee03ced5fc5a9"} err="failed to get container status \"a8189fc4ecb786878f1710afcf5b0018671faa606a6499c6776ee03ced5fc5a9\": rpc error: code = NotFound desc = could not find container \"a8189fc4ecb786878f1710afcf5b0018671faa606a6499c6776ee03ced5fc5a9\": container with ID starting with a8189fc4ecb786878f1710afcf5b0018671faa606a6499c6776ee03ced5fc5a9 not found: ID does not exist" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.868825 5112 scope.go:117] "RemoveContainer" containerID="f93b490d86018f64083ec5026a047c7e8a7954229cd86a40937b6fcdd76f6de3" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.869221 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f93b490d86018f64083ec5026a047c7e8a7954229cd86a40937b6fcdd76f6de3"} err="failed to get container status 
\"f93b490d86018f64083ec5026a047c7e8a7954229cd86a40937b6fcdd76f6de3\": rpc error: code = NotFound desc = could not find container \"f93b490d86018f64083ec5026a047c7e8a7954229cd86a40937b6fcdd76f6de3\": container with ID starting with f93b490d86018f64083ec5026a047c7e8a7954229cd86a40937b6fcdd76f6de3 not found: ID does not exist" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.869245 5112 scope.go:117] "RemoveContainer" containerID="57f10b0d437771a991bc9480367fc4645b551284ff9e81eafbceeeabb13d6455" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.869460 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57f10b0d437771a991bc9480367fc4645b551284ff9e81eafbceeeabb13d6455"} err="failed to get container status \"57f10b0d437771a991bc9480367fc4645b551284ff9e81eafbceeeabb13d6455\": rpc error: code = NotFound desc = could not find container \"57f10b0d437771a991bc9480367fc4645b551284ff9e81eafbceeeabb13d6455\": container with ID starting with 57f10b0d437771a991bc9480367fc4645b551284ff9e81eafbceeeabb13d6455 not found: ID does not exist" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.869494 5112 scope.go:117] "RemoveContainer" containerID="ba08fbd099185890d342cda33b8aa8e8ff1f39b4db64cee879f476f3a32e2faa" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.869791 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba08fbd099185890d342cda33b8aa8e8ff1f39b4db64cee879f476f3a32e2faa"} err="failed to get container status \"ba08fbd099185890d342cda33b8aa8e8ff1f39b4db64cee879f476f3a32e2faa\": rpc error: code = NotFound desc = could not find container \"ba08fbd099185890d342cda33b8aa8e8ff1f39b4db64cee879f476f3a32e2faa\": container with ID starting with ba08fbd099185890d342cda33b8aa8e8ff1f39b4db64cee879f476f3a32e2faa not found: ID does not exist" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.869824 5112 scope.go:117] "RemoveContainer" 
containerID="6ea4775c45f66d7a80a761d5a9692c4a1d25e6422138f45db953efe79b67535b" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.870114 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ea4775c45f66d7a80a761d5a9692c4a1d25e6422138f45db953efe79b67535b"} err="failed to get container status \"6ea4775c45f66d7a80a761d5a9692c4a1d25e6422138f45db953efe79b67535b\": rpc error: code = NotFound desc = could not find container \"6ea4775c45f66d7a80a761d5a9692c4a1d25e6422138f45db953efe79b67535b\": container with ID starting with 6ea4775c45f66d7a80a761d5a9692c4a1d25e6422138f45db953efe79b67535b not found: ID does not exist" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.870137 5112 scope.go:117] "RemoveContainer" containerID="a254fd68045c83555d54d99c38b26a7cceb4e82c07cf657367dc7e54dcfc5b8f" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.870444 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a254fd68045c83555d54d99c38b26a7cceb4e82c07cf657367dc7e54dcfc5b8f"} err="failed to get container status \"a254fd68045c83555d54d99c38b26a7cceb4e82c07cf657367dc7e54dcfc5b8f\": rpc error: code = NotFound desc = could not find container \"a254fd68045c83555d54d99c38b26a7cceb4e82c07cf657367dc7e54dcfc5b8f\": container with ID starting with a254fd68045c83555d54d99c38b26a7cceb4e82c07cf657367dc7e54dcfc5b8f not found: ID does not exist" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.870466 5112 scope.go:117] "RemoveContainer" containerID="ddf7ad90683c0247b1d91e130cf8c3c5cb15d668c9e58d00df0a498a52510072" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.870658 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddf7ad90683c0247b1d91e130cf8c3c5cb15d668c9e58d00df0a498a52510072"} err="failed to get container status \"ddf7ad90683c0247b1d91e130cf8c3c5cb15d668c9e58d00df0a498a52510072\": rpc error: code = NotFound desc = could 
not find container \"ddf7ad90683c0247b1d91e130cf8c3c5cb15d668c9e58d00df0a498a52510072\": container with ID starting with ddf7ad90683c0247b1d91e130cf8c3c5cb15d668c9e58d00df0a498a52510072 not found: ID does not exist" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.870689 5112 scope.go:117] "RemoveContainer" containerID="db5365b52d9af27a994e13a4988cb64763378af99640c1716e4a02b2ca0a5d7a" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.870980 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db5365b52d9af27a994e13a4988cb64763378af99640c1716e4a02b2ca0a5d7a"} err="failed to get container status \"db5365b52d9af27a994e13a4988cb64763378af99640c1716e4a02b2ca0a5d7a\": rpc error: code = NotFound desc = could not find container \"db5365b52d9af27a994e13a4988cb64763378af99640c1716e4a02b2ca0a5d7a\": container with ID starting with db5365b52d9af27a994e13a4988cb64763378af99640c1716e4a02b2ca0a5d7a not found: ID does not exist" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.871064 5112 scope.go:117] "RemoveContainer" containerID="410005bc0af8b3ea126b76aeff05cf047c47f2204015d69f9d73d763e3cb4783" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.871297 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"410005bc0af8b3ea126b76aeff05cf047c47f2204015d69f9d73d763e3cb4783"} err="failed to get container status \"410005bc0af8b3ea126b76aeff05cf047c47f2204015d69f9d73d763e3cb4783\": rpc error: code = NotFound desc = could not find container \"410005bc0af8b3ea126b76aeff05cf047c47f2204015d69f9d73d763e3cb4783\": container with ID starting with 410005bc0af8b3ea126b76aeff05cf047c47f2204015d69f9d73d763e3cb4783 not found: ID does not exist" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.871328 5112 scope.go:117] "RemoveContainer" containerID="a8189fc4ecb786878f1710afcf5b0018671faa606a6499c6776ee03ced5fc5a9" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 
17:52:07.871630 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8189fc4ecb786878f1710afcf5b0018671faa606a6499c6776ee03ced5fc5a9"} err="failed to get container status \"a8189fc4ecb786878f1710afcf5b0018671faa606a6499c6776ee03ced5fc5a9\": rpc error: code = NotFound desc = could not find container \"a8189fc4ecb786878f1710afcf5b0018671faa606a6499c6776ee03ced5fc5a9\": container with ID starting with a8189fc4ecb786878f1710afcf5b0018671faa606a6499c6776ee03ced5fc5a9 not found: ID does not exist" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.871653 5112 scope.go:117] "RemoveContainer" containerID="f93b490d86018f64083ec5026a047c7e8a7954229cd86a40937b6fcdd76f6de3" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.872434 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f93b490d86018f64083ec5026a047c7e8a7954229cd86a40937b6fcdd76f6de3"} err="failed to get container status \"f93b490d86018f64083ec5026a047c7e8a7954229cd86a40937b6fcdd76f6de3\": rpc error: code = NotFound desc = could not find container \"f93b490d86018f64083ec5026a047c7e8a7954229cd86a40937b6fcdd76f6de3\": container with ID starting with f93b490d86018f64083ec5026a047c7e8a7954229cd86a40937b6fcdd76f6de3 not found: ID does not exist" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.872454 5112 scope.go:117] "RemoveContainer" containerID="57f10b0d437771a991bc9480367fc4645b551284ff9e81eafbceeeabb13d6455" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.872675 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57f10b0d437771a991bc9480367fc4645b551284ff9e81eafbceeeabb13d6455"} err="failed to get container status \"57f10b0d437771a991bc9480367fc4645b551284ff9e81eafbceeeabb13d6455\": rpc error: code = NotFound desc = could not find container \"57f10b0d437771a991bc9480367fc4645b551284ff9e81eafbceeeabb13d6455\": container with ID starting with 
57f10b0d437771a991bc9480367fc4645b551284ff9e81eafbceeeabb13d6455 not found: ID does not exist" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.872693 5112 scope.go:117] "RemoveContainer" containerID="ba08fbd099185890d342cda33b8aa8e8ff1f39b4db64cee879f476f3a32e2faa" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.872863 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba08fbd099185890d342cda33b8aa8e8ff1f39b4db64cee879f476f3a32e2faa"} err="failed to get container status \"ba08fbd099185890d342cda33b8aa8e8ff1f39b4db64cee879f476f3a32e2faa\": rpc error: code = NotFound desc = could not find container \"ba08fbd099185890d342cda33b8aa8e8ff1f39b4db64cee879f476f3a32e2faa\": container with ID starting with ba08fbd099185890d342cda33b8aa8e8ff1f39b4db64cee879f476f3a32e2faa not found: ID does not exist" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.872882 5112 scope.go:117] "RemoveContainer" containerID="6ea4775c45f66d7a80a761d5a9692c4a1d25e6422138f45db953efe79b67535b" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.873137 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ea4775c45f66d7a80a761d5a9692c4a1d25e6422138f45db953efe79b67535b"} err="failed to get container status \"6ea4775c45f66d7a80a761d5a9692c4a1d25e6422138f45db953efe79b67535b\": rpc error: code = NotFound desc = could not find container \"6ea4775c45f66d7a80a761d5a9692c4a1d25e6422138f45db953efe79b67535b\": container with ID starting with 6ea4775c45f66d7a80a761d5a9692c4a1d25e6422138f45db953efe79b67535b not found: ID does not exist" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.873164 5112 scope.go:117] "RemoveContainer" containerID="a254fd68045c83555d54d99c38b26a7cceb4e82c07cf657367dc7e54dcfc5b8f" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.873453 5112 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a254fd68045c83555d54d99c38b26a7cceb4e82c07cf657367dc7e54dcfc5b8f"} err="failed to get container status \"a254fd68045c83555d54d99c38b26a7cceb4e82c07cf657367dc7e54dcfc5b8f\": rpc error: code = NotFound desc = could not find container \"a254fd68045c83555d54d99c38b26a7cceb4e82c07cf657367dc7e54dcfc5b8f\": container with ID starting with a254fd68045c83555d54d99c38b26a7cceb4e82c07cf657367dc7e54dcfc5b8f not found: ID does not exist" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.873480 5112 scope.go:117] "RemoveContainer" containerID="ddf7ad90683c0247b1d91e130cf8c3c5cb15d668c9e58d00df0a498a52510072" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.873825 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddf7ad90683c0247b1d91e130cf8c3c5cb15d668c9e58d00df0a498a52510072"} err="failed to get container status \"ddf7ad90683c0247b1d91e130cf8c3c5cb15d668c9e58d00df0a498a52510072\": rpc error: code = NotFound desc = could not find container \"ddf7ad90683c0247b1d91e130cf8c3c5cb15d668c9e58d00df0a498a52510072\": container with ID starting with ddf7ad90683c0247b1d91e130cf8c3c5cb15d668c9e58d00df0a498a52510072 not found: ID does not exist" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.873858 5112 scope.go:117] "RemoveContainer" containerID="db5365b52d9af27a994e13a4988cb64763378af99640c1716e4a02b2ca0a5d7a" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.874055 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db5365b52d9af27a994e13a4988cb64763378af99640c1716e4a02b2ca0a5d7a"} err="failed to get container status \"db5365b52d9af27a994e13a4988cb64763378af99640c1716e4a02b2ca0a5d7a\": rpc error: code = NotFound desc = could not find container \"db5365b52d9af27a994e13a4988cb64763378af99640c1716e4a02b2ca0a5d7a\": container with ID starting with db5365b52d9af27a994e13a4988cb64763378af99640c1716e4a02b2ca0a5d7a not found: ID does not 
exist" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.874073 5112 scope.go:117] "RemoveContainer" containerID="410005bc0af8b3ea126b76aeff05cf047c47f2204015d69f9d73d763e3cb4783" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.876424 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"410005bc0af8b3ea126b76aeff05cf047c47f2204015d69f9d73d763e3cb4783"} err="failed to get container status \"410005bc0af8b3ea126b76aeff05cf047c47f2204015d69f9d73d763e3cb4783\": rpc error: code = NotFound desc = could not find container \"410005bc0af8b3ea126b76aeff05cf047c47f2204015d69f9d73d763e3cb4783\": container with ID starting with 410005bc0af8b3ea126b76aeff05cf047c47f2204015d69f9d73d763e3cb4783 not found: ID does not exist" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.876678 5112 scope.go:117] "RemoveContainer" containerID="a8189fc4ecb786878f1710afcf5b0018671faa606a6499c6776ee03ced5fc5a9" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.877018 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8189fc4ecb786878f1710afcf5b0018671faa606a6499c6776ee03ced5fc5a9"} err="failed to get container status \"a8189fc4ecb786878f1710afcf5b0018671faa606a6499c6776ee03ced5fc5a9\": rpc error: code = NotFound desc = could not find container \"a8189fc4ecb786878f1710afcf5b0018671faa606a6499c6776ee03ced5fc5a9\": container with ID starting with a8189fc4ecb786878f1710afcf5b0018671faa606a6499c6776ee03ced5fc5a9 not found: ID does not exist" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.877095 5112 scope.go:117] "RemoveContainer" containerID="f93b490d86018f64083ec5026a047c7e8a7954229cd86a40937b6fcdd76f6de3" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.880552 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f93b490d86018f64083ec5026a047c7e8a7954229cd86a40937b6fcdd76f6de3"} err="failed to get container status 
\"f93b490d86018f64083ec5026a047c7e8a7954229cd86a40937b6fcdd76f6de3\": rpc error: code = NotFound desc = could not find container \"f93b490d86018f64083ec5026a047c7e8a7954229cd86a40937b6fcdd76f6de3\": container with ID starting with f93b490d86018f64083ec5026a047c7e8a7954229cd86a40937b6fcdd76f6de3 not found: ID does not exist" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.880609 5112 scope.go:117] "RemoveContainer" containerID="57f10b0d437771a991bc9480367fc4645b551284ff9e81eafbceeeabb13d6455" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.881857 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57f10b0d437771a991bc9480367fc4645b551284ff9e81eafbceeeabb13d6455"} err="failed to get container status \"57f10b0d437771a991bc9480367fc4645b551284ff9e81eafbceeeabb13d6455\": rpc error: code = NotFound desc = could not find container \"57f10b0d437771a991bc9480367fc4645b551284ff9e81eafbceeeabb13d6455\": container with ID starting with 57f10b0d437771a991bc9480367fc4645b551284ff9e81eafbceeeabb13d6455 not found: ID does not exist" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.881910 5112 scope.go:117] "RemoveContainer" containerID="ba08fbd099185890d342cda33b8aa8e8ff1f39b4db64cee879f476f3a32e2faa" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.882224 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba08fbd099185890d342cda33b8aa8e8ff1f39b4db64cee879f476f3a32e2faa"} err="failed to get container status \"ba08fbd099185890d342cda33b8aa8e8ff1f39b4db64cee879f476f3a32e2faa\": rpc error: code = NotFound desc = could not find container \"ba08fbd099185890d342cda33b8aa8e8ff1f39b4db64cee879f476f3a32e2faa\": container with ID starting with ba08fbd099185890d342cda33b8aa8e8ff1f39b4db64cee879f476f3a32e2faa not found: ID does not exist" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.886183 5112 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-host-run-ovn-kubernetes\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.886225 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-host-kubelet\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.886284 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gdmb\" (UniqueName: \"kubernetes.io/projected/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-kube-api-access-4gdmb\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.886340 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-host-slash\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.886362 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-node-log\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.886379 5112 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-ovn-node-metrics-cert\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.886397 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-log-socket\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.886413 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-ovnkube-script-lib\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.886438 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-var-lib-openvswitch\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.886453 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-run-openvswitch\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" Dec 08 17:52:07 crc 
kubenswrapper[5112]: I1208 17:52:07.887383 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-run-systemd\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.887423 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-host-cni-netd\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.887488 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-systemd-units\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.887506 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-etc-openvswitch\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.887574 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-host-run-netns\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" Dec 08 
17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.887613 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-ovnkube-config\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.887651 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-run-ovn\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.887689 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-env-overrides\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.887715 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-host-cni-bin\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.887730 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.887778 5112 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0510de3f-316a-4902-a746-a746c3ce594c-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.887792 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7vcrm\" (UniqueName: \"kubernetes.io/projected/0510de3f-316a-4902-a746-a746c3ce594c-kube-api-access-7vcrm\") on node \"crc\" DevicePath \"\"" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.887801 5112 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0510de3f-316a-4902-a746-a746c3ce594c-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.887809 5112 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0510de3f-316a-4902-a746-a746c3ce594c-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.887817 5112 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0510de3f-316a-4902-a746-a746c3ce594c-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.989360 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-run-ovn\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.989416 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-env-overrides\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.989436 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-host-cni-bin\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.989454 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.989476 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-host-run-ovn-kubernetes\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.989499 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-host-kubelet\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.989524 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4gdmb\" (UniqueName: 
\"kubernetes.io/projected/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-kube-api-access-4gdmb\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.989551 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-host-slash\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.989571 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-node-log\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.989587 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-ovn-node-metrics-cert\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.989605 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-log-socket\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.989651 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-ovnkube-script-lib\") 
pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.989676 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-var-lib-openvswitch\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.989690 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-run-openvswitch\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.989732 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-run-systemd\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.989748 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-host-cni-netd\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.989814 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-systemd-units\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.989833 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-etc-openvswitch\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.989858 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-host-run-netns\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.989881 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-ovnkube-config\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.990553 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-ovnkube-config\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.990611 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-run-ovn\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 
17:52:07.990927 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-env-overrides\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72"
Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.990963 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-host-cni-bin\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72"
Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.990985 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72"
Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.991008 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-host-run-ovn-kubernetes\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72"
Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.991028 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-host-kubelet\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72"
Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.991354 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-host-slash\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72"
Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.991383 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-node-log\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72"
Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.991996 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-run-systemd\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72"
Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.992117 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-run-openvswitch\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72"
Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.992123 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-host-cni-netd\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72"
Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.992146 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-etc-openvswitch\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72"
Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.992156 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-systemd-units\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72"
Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.992138 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-log-socket\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72"
Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.992144 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-var-lib-openvswitch\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72"
Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.992173 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-host-run-netns\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72"
Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.992586 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-ovnkube-script-lib\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72"
Dec 08 17:52:07 crc kubenswrapper[5112]: I1208 17:52:07.996914 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-ovn-node-metrics-cert\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72"
Dec 08 17:52:08 crc kubenswrapper[5112]: I1208 17:52:08.007128 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gdmb\" (UniqueName: \"kubernetes.io/projected/baa6b91a-98d9-4e9b-a6b9-e98a34e82b71-kube-api-access-4gdmb\") pod \"ovnkube-node-xtt72\" (UID: \"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtt72"
Dec 08 17:52:08 crc kubenswrapper[5112]: I1208 17:52:08.020525 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xtt72"
Dec 08 17:52:08 crc kubenswrapper[5112]: I1208 17:52:08.051992 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ng27z"]
Dec 08 17:52:08 crc kubenswrapper[5112]: I1208 17:52:08.057335 5112 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ng27z"]
Dec 08 17:52:08 crc kubenswrapper[5112]: I1208 17:52:08.721422 5112 generic.go:358] "Generic (PLEG): container finished" podID="baa6b91a-98d9-4e9b-a6b9-e98a34e82b71" containerID="adf74f266a682693be61f265be4ebe676f8d7882e0a171448db8052371806da7" exitCode=0
Dec 08 17:52:08 crc kubenswrapper[5112]: I1208 17:52:08.721530 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" event={"ID":"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71","Type":"ContainerDied","Data":"adf74f266a682693be61f265be4ebe676f8d7882e0a171448db8052371806da7"}
Dec 08 17:52:08 crc kubenswrapper[5112]: I1208 17:52:08.721798 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" event={"ID":"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71","Type":"ContainerStarted","Data":"c6a7faacac9700a28f72514b8fba829f684f49a2921544c6e714bba8c0afba85"}
Dec 08 17:52:08 crc kubenswrapper[5112]: I1208 17:52:08.724379 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-vcgsh" event={"ID":"54334f53-7b16-49c6-8c38-96656aa7cad0","Type":"ContainerStarted","Data":"d0ed1889f83d96b34110c0e850940354a55e24bd60fea1e766f1d18818c94c7d"}
Dec 08 17:52:08 crc kubenswrapper[5112]: I1208 17:52:08.724448 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-vcgsh" event={"ID":"54334f53-7b16-49c6-8c38-96656aa7cad0","Type":"ContainerStarted","Data":"18daa95393d77c36b727fc336237403c911874cbf224cfcf3b5f5b9863cdcb23"}
Dec 08 17:52:08 crc kubenswrapper[5112]: I1208 17:52:08.724461 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-vcgsh" event={"ID":"54334f53-7b16-49c6-8c38-96656aa7cad0","Type":"ContainerStarted","Data":"25d3a8c4bb12d6321b89c80fa41a41ac053986b2faf720fc4a0796e192922cca"}
Dec 08 17:52:08 crc kubenswrapper[5112]: I1208 17:52:08.727564 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kvv4v_288ee203-be3f-4176-90b2-7d95ee47aee8/kube-multus/0.log"
Dec 08 17:52:08 crc kubenswrapper[5112]: I1208 17:52:08.727631 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kvv4v" event={"ID":"288ee203-be3f-4176-90b2-7d95ee47aee8","Type":"ContainerStarted","Data":"21d6980cd5a14ff9a66ec819515fff20f680e16f03232a4ac0629f832dd432eb"}
Dec 08 17:52:08 crc kubenswrapper[5112]: I1208 17:52:08.764596 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-vcgsh" podStartSLOduration=1.764583198 podStartE2EDuration="1.764583198s" podCreationTimestamp="2025-12-08 17:52:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:52:08.762820651 +0000 UTC m=+705.772369362" watchObservedRunningTime="2025-12-08 17:52:08.764583198 +0000 UTC m=+705.774131899"
Dec 08 17:52:09 crc kubenswrapper[5112]: I1208 17:52:09.324563 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0510de3f-316a-4902-a746-a746c3ce594c" path="/var/lib/kubelet/pods/0510de3f-316a-4902-a746-a746c3ce594c/volumes"
Dec 08 17:52:09 crc kubenswrapper[5112]: I1208 17:52:09.325774 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="472d4dbe-4674-43ba-98da-98502eccb960" path="/var/lib/kubelet/pods/472d4dbe-4674-43ba-98da-98502eccb960/volumes"
Dec 08 17:52:09 crc kubenswrapper[5112]: I1208 17:52:09.736976 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" event={"ID":"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71","Type":"ContainerStarted","Data":"eaa45fa6b76f5288872fa15179dad65328ea784ea4682b6758f150570f61566b"}
Dec 08 17:52:09 crc kubenswrapper[5112]: I1208 17:52:09.737029 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" event={"ID":"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71","Type":"ContainerStarted","Data":"250ad758d09e3e6bf6969a52c9b3648a9927c61c07a78b3a3e0074e31c90981c"}
Dec 08 17:52:09 crc kubenswrapper[5112]: I1208 17:52:09.737044 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" event={"ID":"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71","Type":"ContainerStarted","Data":"065751831f25b4380c271acd49473ae0a9981ef7b52d4dab5e727ef4ce1b1e87"}
Dec 08 17:52:09 crc kubenswrapper[5112]: I1208 17:52:09.737056 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" event={"ID":"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71","Type":"ContainerStarted","Data":"4203e5525adb3b88afe3f5fa8292b4b282ef92e722367544ccc98631fe5c4596"}
Dec 08 17:52:09 crc kubenswrapper[5112]: I1208 17:52:09.737067 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" event={"ID":"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71","Type":"ContainerStarted","Data":"e2a19b1305f047742658c51bd74496f20e3088c1da95a68f4e4ecb190d9040d4"}
Dec 08 17:52:09 crc kubenswrapper[5112]: I1208 17:52:09.737104 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" event={"ID":"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71","Type":"ContainerStarted","Data":"734fa16e4a40203a8c0d68679ff6106adb205760bae038802f63805fb4a4ce88"}
Dec 08 17:52:11 crc kubenswrapper[5112]: I1208 17:52:11.754155 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" event={"ID":"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71","Type":"ContainerStarted","Data":"fd5b506b6b1cfe2cdb9a83e4010d855f723cea209a2a2205e8984ec1eed72df3"}
Dec 08 17:52:15 crc kubenswrapper[5112]: I1208 17:52:15.779368 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" event={"ID":"baa6b91a-98d9-4e9b-a6b9-e98a34e82b71","Type":"ContainerStarted","Data":"5071aedae7e074505ab56197bcf82ba8e5ddab92a850747565ba40707fb318aa"}
Dec 08 17:52:15 crc kubenswrapper[5112]: I1208 17:52:15.779922 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-xtt72"
Dec 08 17:52:15 crc kubenswrapper[5112]: I1208 17:52:15.780021 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-xtt72"
Dec 08 17:52:15 crc kubenswrapper[5112]: I1208 17:52:15.780037 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-xtt72"
Dec 08 17:52:15 crc kubenswrapper[5112]: I1208 17:52:15.810276 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xtt72"
Dec 08 17:52:15 crc kubenswrapper[5112]: I1208 17:52:15.811153 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xtt72"
Dec 08 17:52:15 crc kubenswrapper[5112]: I1208 17:52:15.815785 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-xtt72" podStartSLOduration=8.815768431 podStartE2EDuration="8.815768431s" podCreationTimestamp="2025-12-08 17:52:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:52:15.815133324 +0000 UTC m=+712.824682055" watchObservedRunningTime="2025-12-08 17:52:15.815768431 +0000 UTC m=+712.825317142"
Dec 08 17:52:23 crc kubenswrapper[5112]: I1208 17:52:23.569020 5112 scope.go:117] "RemoveContainer" containerID="f144781c243b5270f65ed3ad052edfb4bd18a942565a3ad88814dfcfbff114c6"
Dec 08 17:52:23 crc kubenswrapper[5112]: I1208 17:52:23.587939 5112 scope.go:117] "RemoveContainer" containerID="0d4a0df1b413953ea22d933f6d1c17cce51ce61ba86fb54a1b5cef34411d7394"
Dec 08 17:52:47 crc kubenswrapper[5112]: I1208 17:52:47.813558 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xtt72"
Dec 08 17:53:21 crc kubenswrapper[5112]: I1208 17:53:21.372651 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k5bmv"]
Dec 08 17:53:21 crc kubenswrapper[5112]: I1208 17:53:21.373580 5112 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k5bmv" podUID="9670a33c-0814-4c92-9bf2-8eff61da9fb7" containerName="registry-server" containerID="cri-o://1526ad33138b35aef7f5ddabb3526fb7a2c6e70421a9a3781f09b008073314b1" gracePeriod=30
Dec 08 17:53:21 crc kubenswrapper[5112]: E1208 17:53:21.479121 5112 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1526ad33138b35aef7f5ddabb3526fb7a2c6e70421a9a3781f09b008073314b1 is running failed: container process not found" containerID="1526ad33138b35aef7f5ddabb3526fb7a2c6e70421a9a3781f09b008073314b1" cmd=["grpc_health_probe","-addr=:50051"]
Dec 08 17:53:21 crc kubenswrapper[5112]: E1208 17:53:21.479653 5112 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1526ad33138b35aef7f5ddabb3526fb7a2c6e70421a9a3781f09b008073314b1 is running failed: container process not found" containerID="1526ad33138b35aef7f5ddabb3526fb7a2c6e70421a9a3781f09b008073314b1" cmd=["grpc_health_probe","-addr=:50051"]
Dec 08 17:53:21 crc kubenswrapper[5112]: E1208 17:53:21.479927 5112 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1526ad33138b35aef7f5ddabb3526fb7a2c6e70421a9a3781f09b008073314b1 is running failed: container process not found" containerID="1526ad33138b35aef7f5ddabb3526fb7a2c6e70421a9a3781f09b008073314b1" cmd=["grpc_health_probe","-addr=:50051"]
Dec 08 17:53:21 crc kubenswrapper[5112]: E1208 17:53:21.479977 5112 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1526ad33138b35aef7f5ddabb3526fb7a2c6e70421a9a3781f09b008073314b1 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-k5bmv" podUID="9670a33c-0814-4c92-9bf2-8eff61da9fb7" containerName="registry-server" probeResult="unknown"
Dec 08 17:53:21 crc kubenswrapper[5112]: I1208 17:53:21.703733 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k5bmv"
Dec 08 17:53:21 crc kubenswrapper[5112]: I1208 17:53:21.784733 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9670a33c-0814-4c92-9bf2-8eff61da9fb7-utilities\") pod \"9670a33c-0814-4c92-9bf2-8eff61da9fb7\" (UID: \"9670a33c-0814-4c92-9bf2-8eff61da9fb7\") "
Dec 08 17:53:21 crc kubenswrapper[5112]: I1208 17:53:21.784933 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bphps\" (UniqueName: \"kubernetes.io/projected/9670a33c-0814-4c92-9bf2-8eff61da9fb7-kube-api-access-bphps\") pod \"9670a33c-0814-4c92-9bf2-8eff61da9fb7\" (UID: \"9670a33c-0814-4c92-9bf2-8eff61da9fb7\") "
Dec 08 17:53:21 crc kubenswrapper[5112]: I1208 17:53:21.785053 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9670a33c-0814-4c92-9bf2-8eff61da9fb7-catalog-content\") pod \"9670a33c-0814-4c92-9bf2-8eff61da9fb7\" (UID: \"9670a33c-0814-4c92-9bf2-8eff61da9fb7\") "
Dec 08 17:53:21 crc kubenswrapper[5112]: I1208 17:53:21.785971 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9670a33c-0814-4c92-9bf2-8eff61da9fb7-utilities" (OuterVolumeSpecName: "utilities") pod "9670a33c-0814-4c92-9bf2-8eff61da9fb7" (UID: "9670a33c-0814-4c92-9bf2-8eff61da9fb7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:53:21 crc kubenswrapper[5112]: I1208 17:53:21.792863 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9670a33c-0814-4c92-9bf2-8eff61da9fb7-kube-api-access-bphps" (OuterVolumeSpecName: "kube-api-access-bphps") pod "9670a33c-0814-4c92-9bf2-8eff61da9fb7" (UID: "9670a33c-0814-4c92-9bf2-8eff61da9fb7"). InnerVolumeSpecName "kube-api-access-bphps". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:53:21 crc kubenswrapper[5112]: I1208 17:53:21.799724 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9670a33c-0814-4c92-9bf2-8eff61da9fb7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9670a33c-0814-4c92-9bf2-8eff61da9fb7" (UID: "9670a33c-0814-4c92-9bf2-8eff61da9fb7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:53:21 crc kubenswrapper[5112]: I1208 17:53:21.886767 5112 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9670a33c-0814-4c92-9bf2-8eff61da9fb7-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 08 17:53:21 crc kubenswrapper[5112]: I1208 17:53:21.886837 5112 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9670a33c-0814-4c92-9bf2-8eff61da9fb7-utilities\") on node \"crc\" DevicePath \"\""
Dec 08 17:53:21 crc kubenswrapper[5112]: I1208 17:53:21.886861 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bphps\" (UniqueName: \"kubernetes.io/projected/9670a33c-0814-4c92-9bf2-8eff61da9fb7-kube-api-access-bphps\") on node \"crc\" DevicePath \"\""
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.168603 5112 generic.go:358] "Generic (PLEG): container finished" podID="9670a33c-0814-4c92-9bf2-8eff61da9fb7" containerID="1526ad33138b35aef7f5ddabb3526fb7a2c6e70421a9a3781f09b008073314b1" exitCode=0
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.168793 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5bmv" event={"ID":"9670a33c-0814-4c92-9bf2-8eff61da9fb7","Type":"ContainerDied","Data":"1526ad33138b35aef7f5ddabb3526fb7a2c6e70421a9a3781f09b008073314b1"}
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.168824 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k5bmv" event={"ID":"9670a33c-0814-4c92-9bf2-8eff61da9fb7","Type":"ContainerDied","Data":"c74e5b8e4259047e5d96f9fb137723d80900734526f4dbc9707b6378b41dcb9a"}
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.168846 5112 scope.go:117] "RemoveContainer" containerID="1526ad33138b35aef7f5ddabb3526fb7a2c6e70421a9a3781f09b008073314b1"
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.169013 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k5bmv"
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.189403 5112 scope.go:117] "RemoveContainer" containerID="fa1cad752dbabb15bd5ebc07f99b03f6b5b6a2fb61b721c2df98f115183bbb96"
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.214616 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k5bmv"]
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.227551 5112 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k5bmv"]
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.249011 5112 scope.go:117] "RemoveContainer" containerID="a6f335ce8f1c3421eda08f43d3bdab1c5e496be3b4657588e665b68a8cfc3b57"
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.263813 5112 scope.go:117] "RemoveContainer" containerID="1526ad33138b35aef7f5ddabb3526fb7a2c6e70421a9a3781f09b008073314b1"
Dec 08 17:53:22 crc kubenswrapper[5112]: E1208 17:53:22.264288 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1526ad33138b35aef7f5ddabb3526fb7a2c6e70421a9a3781f09b008073314b1\": container with ID starting with 1526ad33138b35aef7f5ddabb3526fb7a2c6e70421a9a3781f09b008073314b1 not found: ID does not exist" containerID="1526ad33138b35aef7f5ddabb3526fb7a2c6e70421a9a3781f09b008073314b1"
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.264319 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1526ad33138b35aef7f5ddabb3526fb7a2c6e70421a9a3781f09b008073314b1"} err="failed to get container status \"1526ad33138b35aef7f5ddabb3526fb7a2c6e70421a9a3781f09b008073314b1\": rpc error: code = NotFound desc = could not find container \"1526ad33138b35aef7f5ddabb3526fb7a2c6e70421a9a3781f09b008073314b1\": container with ID starting with 1526ad33138b35aef7f5ddabb3526fb7a2c6e70421a9a3781f09b008073314b1 not found: ID does not exist"
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.264338 5112 scope.go:117] "RemoveContainer" containerID="fa1cad752dbabb15bd5ebc07f99b03f6b5b6a2fb61b721c2df98f115183bbb96"
Dec 08 17:53:22 crc kubenswrapper[5112]: E1208 17:53:22.264673 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa1cad752dbabb15bd5ebc07f99b03f6b5b6a2fb61b721c2df98f115183bbb96\": container with ID starting with fa1cad752dbabb15bd5ebc07f99b03f6b5b6a2fb61b721c2df98f115183bbb96 not found: ID does not exist" containerID="fa1cad752dbabb15bd5ebc07f99b03f6b5b6a2fb61b721c2df98f115183bbb96"
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.264721 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa1cad752dbabb15bd5ebc07f99b03f6b5b6a2fb61b721c2df98f115183bbb96"} err="failed to get container status \"fa1cad752dbabb15bd5ebc07f99b03f6b5b6a2fb61b721c2df98f115183bbb96\": rpc error: code = NotFound desc = could not find container \"fa1cad752dbabb15bd5ebc07f99b03f6b5b6a2fb61b721c2df98f115183bbb96\": container with ID starting with fa1cad752dbabb15bd5ebc07f99b03f6b5b6a2fb61b721c2df98f115183bbb96 not found: ID does not exist"
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.264752 5112 scope.go:117] "RemoveContainer" containerID="a6f335ce8f1c3421eda08f43d3bdab1c5e496be3b4657588e665b68a8cfc3b57"
Dec 08 17:53:22 crc kubenswrapper[5112]: E1208 17:53:22.265019 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6f335ce8f1c3421eda08f43d3bdab1c5e496be3b4657588e665b68a8cfc3b57\": container with ID starting with a6f335ce8f1c3421eda08f43d3bdab1c5e496be3b4657588e665b68a8cfc3b57 not found: ID does not exist" containerID="a6f335ce8f1c3421eda08f43d3bdab1c5e496be3b4657588e665b68a8cfc3b57"
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.265040 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6f335ce8f1c3421eda08f43d3bdab1c5e496be3b4657588e665b68a8cfc3b57"} err="failed to get container status \"a6f335ce8f1c3421eda08f43d3bdab1c5e496be3b4657588e665b68a8cfc3b57\": rpc error: code = NotFound desc = could not find container \"a6f335ce8f1c3421eda08f43d3bdab1c5e496be3b4657588e665b68a8cfc3b57\": container with ID starting with a6f335ce8f1c3421eda08f43d3bdab1c5e496be3b4657588e665b68a8cfc3b57 not found: ID does not exist"
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.418609 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-cpvvv"]
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.419424 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9670a33c-0814-4c92-9bf2-8eff61da9fb7" containerName="extract-content"
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.419440 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="9670a33c-0814-4c92-9bf2-8eff61da9fb7" containerName="extract-content"
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.419457 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9670a33c-0814-4c92-9bf2-8eff61da9fb7" containerName="registry-server"
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.419466 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="9670a33c-0814-4c92-9bf2-8eff61da9fb7" containerName="registry-server"
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.419475 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9670a33c-0814-4c92-9bf2-8eff61da9fb7" containerName="extract-utilities"
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.419482 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="9670a33c-0814-4c92-9bf2-8eff61da9fb7" containerName="extract-utilities"
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.419587 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="9670a33c-0814-4c92-9bf2-8eff61da9fb7" containerName="registry-server"
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.427193 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-cpvvv"
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.438340 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-cpvvv"]
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.494853 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9120f9e0-3133-40e9-9a18-8c960370c13a-registry-certificates\") pod \"image-registry-5d9d95bf5b-cpvvv\" (UID: \"9120f9e0-3133-40e9-9a18-8c960370c13a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cpvvv"
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.494889 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9120f9e0-3133-40e9-9a18-8c960370c13a-trusted-ca\") pod \"image-registry-5d9d95bf5b-cpvvv\" (UID: \"9120f9e0-3133-40e9-9a18-8c960370c13a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cpvvv"
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.494914 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9120f9e0-3133-40e9-9a18-8c960370c13a-registry-tls\") pod \"image-registry-5d9d95bf5b-cpvvv\" (UID: \"9120f9e0-3133-40e9-9a18-8c960370c13a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cpvvv"
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.494931 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmck2\" (UniqueName: \"kubernetes.io/projected/9120f9e0-3133-40e9-9a18-8c960370c13a-kube-api-access-kmck2\") pod \"image-registry-5d9d95bf5b-cpvvv\" (UID: \"9120f9e0-3133-40e9-9a18-8c960370c13a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cpvvv"
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.495096 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-cpvvv\" (UID: \"9120f9e0-3133-40e9-9a18-8c960370c13a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cpvvv"
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.495187 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9120f9e0-3133-40e9-9a18-8c960370c13a-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-cpvvv\" (UID: \"9120f9e0-3133-40e9-9a18-8c960370c13a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cpvvv"
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.495310 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9120f9e0-3133-40e9-9a18-8c960370c13a-bound-sa-token\") pod \"image-registry-5d9d95bf5b-cpvvv\" (UID: \"9120f9e0-3133-40e9-9a18-8c960370c13a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cpvvv"
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.495334 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9120f9e0-3133-40e9-9a18-8c960370c13a-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-cpvvv\" (UID: \"9120f9e0-3133-40e9-9a18-8c960370c13a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cpvvv"
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.512686 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-cpvvv\" (UID: \"9120f9e0-3133-40e9-9a18-8c960370c13a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cpvvv"
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.596379 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9120f9e0-3133-40e9-9a18-8c960370c13a-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-cpvvv\" (UID: \"9120f9e0-3133-40e9-9a18-8c960370c13a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cpvvv"
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.596482 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9120f9e0-3133-40e9-9a18-8c960370c13a-bound-sa-token\") pod \"image-registry-5d9d95bf5b-cpvvv\" (UID: \"9120f9e0-3133-40e9-9a18-8c960370c13a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cpvvv"
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.596506 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9120f9e0-3133-40e9-9a18-8c960370c13a-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-cpvvv\" (UID: \"9120f9e0-3133-40e9-9a18-8c960370c13a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cpvvv"
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.596530 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9120f9e0-3133-40e9-9a18-8c960370c13a-registry-certificates\") pod \"image-registry-5d9d95bf5b-cpvvv\" (UID: \"9120f9e0-3133-40e9-9a18-8c960370c13a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cpvvv"
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.596546 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9120f9e0-3133-40e9-9a18-8c960370c13a-trusted-ca\") pod \"image-registry-5d9d95bf5b-cpvvv\" (UID: \"9120f9e0-3133-40e9-9a18-8c960370c13a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cpvvv"
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.596571 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9120f9e0-3133-40e9-9a18-8c960370c13a-registry-tls\") pod \"image-registry-5d9d95bf5b-cpvvv\" (UID: \"9120f9e0-3133-40e9-9a18-8c960370c13a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cpvvv"
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.596734 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kmck2\" (UniqueName: \"kubernetes.io/projected/9120f9e0-3133-40e9-9a18-8c960370c13a-kube-api-access-kmck2\") pod \"image-registry-5d9d95bf5b-cpvvv\" (UID: \"9120f9e0-3133-40e9-9a18-8c960370c13a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cpvvv"
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.597039 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9120f9e0-3133-40e9-9a18-8c960370c13a-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-cpvvv\" (UID: \"9120f9e0-3133-40e9-9a18-8c960370c13a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cpvvv"
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.598415 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9120f9e0-3133-40e9-9a18-8c960370c13a-trusted-ca\") pod \"image-registry-5d9d95bf5b-cpvvv\" (UID: \"9120f9e0-3133-40e9-9a18-8c960370c13a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cpvvv"
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.598560 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9120f9e0-3133-40e9-9a18-8c960370c13a-registry-certificates\") pod \"image-registry-5d9d95bf5b-cpvvv\" (UID: \"9120f9e0-3133-40e9-9a18-8c960370c13a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cpvvv"
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.603123 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9120f9e0-3133-40e9-9a18-8c960370c13a-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-cpvvv\" (UID: \"9120f9e0-3133-40e9-9a18-8c960370c13a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cpvvv"
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.611106 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9120f9e0-3133-40e9-9a18-8c960370c13a-registry-tls\") pod \"image-registry-5d9d95bf5b-cpvvv\" (UID: \"9120f9e0-3133-40e9-9a18-8c960370c13a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cpvvv"
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.614868 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9120f9e0-3133-40e9-9a18-8c960370c13a-bound-sa-token\") pod \"image-registry-5d9d95bf5b-cpvvv\" (UID: \"9120f9e0-3133-40e9-9a18-8c960370c13a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cpvvv"
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.615110 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmck2\" (UniqueName: \"kubernetes.io/projected/9120f9e0-3133-40e9-9a18-8c960370c13a-kube-api-access-kmck2\") pod \"image-registry-5d9d95bf5b-cpvvv\" (UID: \"9120f9e0-3133-40e9-9a18-8c960370c13a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-cpvvv"
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.742525 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-cpvvv"
Dec 08 17:53:22 crc kubenswrapper[5112]: I1208 17:53:22.939265 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-cpvvv"]
Dec 08 17:53:23 crc kubenswrapper[5112]: I1208 17:53:23.177457 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-cpvvv" event={"ID":"9120f9e0-3133-40e9-9a18-8c960370c13a","Type":"ContainerStarted","Data":"92d3f4bc7dad6cbf6d57ba01357fcb9c256393362c040cc23b0ef6add34c9f3e"}
Dec 08 17:53:23 crc kubenswrapper[5112]: I1208 17:53:23.177498 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-cpvvv" event={"ID":"9120f9e0-3133-40e9-9a18-8c960370c13a","Type":"ContainerStarted","Data":"6b1b8aaf2c82a414fe52860c30cbd302823422dd7395de564a8bb26be081df5b"}
Dec 08 17:53:23 crc kubenswrapper[5112]: I1208 17:53:23.198932 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-cpvvv" podStartSLOduration=1.19891604 podStartE2EDuration="1.19891604s" podCreationTimestamp="2025-12-08 17:53:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:53:23.198458517 +0000 UTC m=+780.208007208" watchObservedRunningTime="2025-12-08 17:53:23.19891604 +0000 UTC m=+780.208464741"
Dec 08 17:53:23 
crc kubenswrapper[5112]: I1208 17:53:23.338653 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9670a33c-0814-4c92-9bf2-8eff61da9fb7" path="/var/lib/kubelet/pods/9670a33c-0814-4c92-9bf2-8eff61da9fb7/volumes" Dec 08 17:53:24 crc kubenswrapper[5112]: I1208 17:53:24.182094 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-cpvvv" Dec 08 17:53:25 crc kubenswrapper[5112]: I1208 17:53:25.075827 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p848j"] Dec 08 17:53:25 crc kubenswrapper[5112]: I1208 17:53:25.101921 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p848j"] Dec 08 17:53:25 crc kubenswrapper[5112]: I1208 17:53:25.102074 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p848j" Dec 08 17:53:25 crc kubenswrapper[5112]: I1208 17:53:25.104478 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Dec 08 17:53:25 crc kubenswrapper[5112]: I1208 17:53:25.234038 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n96rt\" (UniqueName: \"kubernetes.io/projected/b80824bc-0db1-44d0-b177-2e3c27c5818c-kube-api-access-n96rt\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p848j\" (UID: \"b80824bc-0db1-44d0-b177-2e3c27c5818c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p848j" Dec 08 17:53:25 crc kubenswrapper[5112]: I1208 17:53:25.234192 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/b80824bc-0db1-44d0-b177-2e3c27c5818c-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p848j\" (UID: \"b80824bc-0db1-44d0-b177-2e3c27c5818c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p848j" Dec 08 17:53:25 crc kubenswrapper[5112]: I1208 17:53:25.234271 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b80824bc-0db1-44d0-b177-2e3c27c5818c-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p848j\" (UID: \"b80824bc-0db1-44d0-b177-2e3c27c5818c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p848j" Dec 08 17:53:25 crc kubenswrapper[5112]: I1208 17:53:25.335463 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n96rt\" (UniqueName: \"kubernetes.io/projected/b80824bc-0db1-44d0-b177-2e3c27c5818c-kube-api-access-n96rt\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p848j\" (UID: \"b80824bc-0db1-44d0-b177-2e3c27c5818c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p848j" Dec 08 17:53:25 crc kubenswrapper[5112]: I1208 17:53:25.335522 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b80824bc-0db1-44d0-b177-2e3c27c5818c-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p848j\" (UID: \"b80824bc-0db1-44d0-b177-2e3c27c5818c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p848j" Dec 08 17:53:25 crc kubenswrapper[5112]: I1208 17:53:25.335561 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b80824bc-0db1-44d0-b177-2e3c27c5818c-bundle\") pod 
\"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p848j\" (UID: \"b80824bc-0db1-44d0-b177-2e3c27c5818c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p848j" Dec 08 17:53:25 crc kubenswrapper[5112]: I1208 17:53:25.336598 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b80824bc-0db1-44d0-b177-2e3c27c5818c-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p848j\" (UID: \"b80824bc-0db1-44d0-b177-2e3c27c5818c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p848j" Dec 08 17:53:25 crc kubenswrapper[5112]: I1208 17:53:25.337073 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b80824bc-0db1-44d0-b177-2e3c27c5818c-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p848j\" (UID: \"b80824bc-0db1-44d0-b177-2e3c27c5818c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p848j" Dec 08 17:53:25 crc kubenswrapper[5112]: I1208 17:53:25.373186 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n96rt\" (UniqueName: \"kubernetes.io/projected/b80824bc-0db1-44d0-b177-2e3c27c5818c-kube-api-access-n96rt\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p848j\" (UID: \"b80824bc-0db1-44d0-b177-2e3c27c5818c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p848j" Dec 08 17:53:25 crc kubenswrapper[5112]: I1208 17:53:25.416240 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p848j" Dec 08 17:53:25 crc kubenswrapper[5112]: I1208 17:53:25.776103 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p848j"] Dec 08 17:53:26 crc kubenswrapper[5112]: I1208 17:53:26.201305 5112 generic.go:358] "Generic (PLEG): container finished" podID="b80824bc-0db1-44d0-b177-2e3c27c5818c" containerID="235da5666fb34896875698e9f4f8d35b632009d9fe5cce91cdbfb5caa7e1e436" exitCode=0 Dec 08 17:53:26 crc kubenswrapper[5112]: I1208 17:53:26.201378 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p848j" event={"ID":"b80824bc-0db1-44d0-b177-2e3c27c5818c","Type":"ContainerDied","Data":"235da5666fb34896875698e9f4f8d35b632009d9fe5cce91cdbfb5caa7e1e436"} Dec 08 17:53:26 crc kubenswrapper[5112]: I1208 17:53:26.201861 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p848j" event={"ID":"b80824bc-0db1-44d0-b177-2e3c27c5818c","Type":"ContainerStarted","Data":"55564813a1ae31b0e00e6a6a2c2f837517331733bb8de5aa5ad5eccd09e58fe3"} Dec 08 17:53:27 crc kubenswrapper[5112]: I1208 17:53:27.610550 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p2nq5"] Dec 08 17:53:27 crc kubenswrapper[5112]: I1208 17:53:27.643505 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p2nq5"] Dec 08 17:53:27 crc kubenswrapper[5112]: I1208 17:53:27.643673 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p2nq5" Dec 08 17:53:27 crc kubenswrapper[5112]: I1208 17:53:27.770355 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed4c161b-4118-44a3-bb05-62672bf0c9c2-catalog-content\") pod \"redhat-operators-p2nq5\" (UID: \"ed4c161b-4118-44a3-bb05-62672bf0c9c2\") " pod="openshift-marketplace/redhat-operators-p2nq5" Dec 08 17:53:27 crc kubenswrapper[5112]: I1208 17:53:27.770428 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5c5x\" (UniqueName: \"kubernetes.io/projected/ed4c161b-4118-44a3-bb05-62672bf0c9c2-kube-api-access-q5c5x\") pod \"redhat-operators-p2nq5\" (UID: \"ed4c161b-4118-44a3-bb05-62672bf0c9c2\") " pod="openshift-marketplace/redhat-operators-p2nq5" Dec 08 17:53:27 crc kubenswrapper[5112]: I1208 17:53:27.770739 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed4c161b-4118-44a3-bb05-62672bf0c9c2-utilities\") pod \"redhat-operators-p2nq5\" (UID: \"ed4c161b-4118-44a3-bb05-62672bf0c9c2\") " pod="openshift-marketplace/redhat-operators-p2nq5" Dec 08 17:53:27 crc kubenswrapper[5112]: I1208 17:53:27.872125 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q5c5x\" (UniqueName: \"kubernetes.io/projected/ed4c161b-4118-44a3-bb05-62672bf0c9c2-kube-api-access-q5c5x\") pod \"redhat-operators-p2nq5\" (UID: \"ed4c161b-4118-44a3-bb05-62672bf0c9c2\") " pod="openshift-marketplace/redhat-operators-p2nq5" Dec 08 17:53:27 crc kubenswrapper[5112]: I1208 17:53:27.872275 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed4c161b-4118-44a3-bb05-62672bf0c9c2-utilities\") pod \"redhat-operators-p2nq5\" (UID: 
\"ed4c161b-4118-44a3-bb05-62672bf0c9c2\") " pod="openshift-marketplace/redhat-operators-p2nq5" Dec 08 17:53:27 crc kubenswrapper[5112]: I1208 17:53:27.872311 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed4c161b-4118-44a3-bb05-62672bf0c9c2-catalog-content\") pod \"redhat-operators-p2nq5\" (UID: \"ed4c161b-4118-44a3-bb05-62672bf0c9c2\") " pod="openshift-marketplace/redhat-operators-p2nq5" Dec 08 17:53:27 crc kubenswrapper[5112]: I1208 17:53:27.872846 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed4c161b-4118-44a3-bb05-62672bf0c9c2-utilities\") pod \"redhat-operators-p2nq5\" (UID: \"ed4c161b-4118-44a3-bb05-62672bf0c9c2\") " pod="openshift-marketplace/redhat-operators-p2nq5" Dec 08 17:53:27 crc kubenswrapper[5112]: I1208 17:53:27.872868 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed4c161b-4118-44a3-bb05-62672bf0c9c2-catalog-content\") pod \"redhat-operators-p2nq5\" (UID: \"ed4c161b-4118-44a3-bb05-62672bf0c9c2\") " pod="openshift-marketplace/redhat-operators-p2nq5" Dec 08 17:53:27 crc kubenswrapper[5112]: I1208 17:53:27.893594 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5c5x\" (UniqueName: \"kubernetes.io/projected/ed4c161b-4118-44a3-bb05-62672bf0c9c2-kube-api-access-q5c5x\") pod \"redhat-operators-p2nq5\" (UID: \"ed4c161b-4118-44a3-bb05-62672bf0c9c2\") " pod="openshift-marketplace/redhat-operators-p2nq5" Dec 08 17:53:27 crc kubenswrapper[5112]: I1208 17:53:27.964491 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p2nq5" Dec 08 17:53:28 crc kubenswrapper[5112]: I1208 17:53:28.224548 5112 generic.go:358] "Generic (PLEG): container finished" podID="b80824bc-0db1-44d0-b177-2e3c27c5818c" containerID="2e05979e1d788c0efe48bfb76d552e2f1e75b4917b2bd122f9caa2d9c82e1ae2" exitCode=0 Dec 08 17:53:28 crc kubenswrapper[5112]: I1208 17:53:28.224652 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p848j" event={"ID":"b80824bc-0db1-44d0-b177-2e3c27c5818c","Type":"ContainerDied","Data":"2e05979e1d788c0efe48bfb76d552e2f1e75b4917b2bd122f9caa2d9c82e1ae2"} Dec 08 17:53:28 crc kubenswrapper[5112]: I1208 17:53:28.432926 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p2nq5"] Dec 08 17:53:28 crc kubenswrapper[5112]: W1208 17:53:28.437048 5112 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded4c161b_4118_44a3_bb05_62672bf0c9c2.slice/crio-8df2616a97072831a6f496cfa917100bfea207d616f03b69ae9092ef04d3269e WatchSource:0}: Error finding container 8df2616a97072831a6f496cfa917100bfea207d616f03b69ae9092ef04d3269e: Status 404 returned error can't find the container with id 8df2616a97072831a6f496cfa917100bfea207d616f03b69ae9092ef04d3269e Dec 08 17:53:29 crc kubenswrapper[5112]: I1208 17:53:29.232425 5112 generic.go:358] "Generic (PLEG): container finished" podID="ed4c161b-4118-44a3-bb05-62672bf0c9c2" containerID="dd8cc635878cddab5e8e609b7e9d4e997655a5bd809b434236e9230b4d00f67e" exitCode=0 Dec 08 17:53:29 crc kubenswrapper[5112]: I1208 17:53:29.232480 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p2nq5" event={"ID":"ed4c161b-4118-44a3-bb05-62672bf0c9c2","Type":"ContainerDied","Data":"dd8cc635878cddab5e8e609b7e9d4e997655a5bd809b434236e9230b4d00f67e"} Dec 08 17:53:29 crc 
kubenswrapper[5112]: I1208 17:53:29.232869 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p2nq5" event={"ID":"ed4c161b-4118-44a3-bb05-62672bf0c9c2","Type":"ContainerStarted","Data":"8df2616a97072831a6f496cfa917100bfea207d616f03b69ae9092ef04d3269e"} Dec 08 17:53:29 crc kubenswrapper[5112]: I1208 17:53:29.237991 5112 generic.go:358] "Generic (PLEG): container finished" podID="b80824bc-0db1-44d0-b177-2e3c27c5818c" containerID="1c42821920758851212556196a1100dbc856673a0472c51663307ce75be64f9a" exitCode=0 Dec 08 17:53:29 crc kubenswrapper[5112]: I1208 17:53:29.238042 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p848j" event={"ID":"b80824bc-0db1-44d0-b177-2e3c27c5818c","Type":"ContainerDied","Data":"1c42821920758851212556196a1100dbc856673a0472c51663307ce75be64f9a"} Dec 08 17:53:30 crc kubenswrapper[5112]: I1208 17:53:30.292057 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p2nq5" event={"ID":"ed4c161b-4118-44a3-bb05-62672bf0c9c2","Type":"ContainerStarted","Data":"fff5c12b4fc65ed125a9806378301da22bbb72d500ff6787c425fd67fec62c48"} Dec 08 17:53:30 crc kubenswrapper[5112]: I1208 17:53:30.677489 5112 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p848j" Dec 08 17:53:30 crc kubenswrapper[5112]: I1208 17:53:30.809959 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b80824bc-0db1-44d0-b177-2e3c27c5818c-bundle\") pod \"b80824bc-0db1-44d0-b177-2e3c27c5818c\" (UID: \"b80824bc-0db1-44d0-b177-2e3c27c5818c\") " Dec 08 17:53:30 crc kubenswrapper[5112]: I1208 17:53:30.810040 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n96rt\" (UniqueName: \"kubernetes.io/projected/b80824bc-0db1-44d0-b177-2e3c27c5818c-kube-api-access-n96rt\") pod \"b80824bc-0db1-44d0-b177-2e3c27c5818c\" (UID: \"b80824bc-0db1-44d0-b177-2e3c27c5818c\") " Dec 08 17:53:30 crc kubenswrapper[5112]: I1208 17:53:30.810173 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b80824bc-0db1-44d0-b177-2e3c27c5818c-util\") pod \"b80824bc-0db1-44d0-b177-2e3c27c5818c\" (UID: \"b80824bc-0db1-44d0-b177-2e3c27c5818c\") " Dec 08 17:53:30 crc kubenswrapper[5112]: I1208 17:53:30.812643 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b80824bc-0db1-44d0-b177-2e3c27c5818c-bundle" (OuterVolumeSpecName: "bundle") pod "b80824bc-0db1-44d0-b177-2e3c27c5818c" (UID: "b80824bc-0db1-44d0-b177-2e3c27c5818c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:53:30 crc kubenswrapper[5112]: I1208 17:53:30.823587 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b80824bc-0db1-44d0-b177-2e3c27c5818c-kube-api-access-n96rt" (OuterVolumeSpecName: "kube-api-access-n96rt") pod "b80824bc-0db1-44d0-b177-2e3c27c5818c" (UID: "b80824bc-0db1-44d0-b177-2e3c27c5818c"). InnerVolumeSpecName "kube-api-access-n96rt". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:53:30 crc kubenswrapper[5112]: I1208 17:53:30.825988 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b80824bc-0db1-44d0-b177-2e3c27c5818c-util" (OuterVolumeSpecName: "util") pod "b80824bc-0db1-44d0-b177-2e3c27c5818c" (UID: "b80824bc-0db1-44d0-b177-2e3c27c5818c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:53:30 crc kubenswrapper[5112]: I1208 17:53:30.911958 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n96rt\" (UniqueName: \"kubernetes.io/projected/b80824bc-0db1-44d0-b177-2e3c27c5818c-kube-api-access-n96rt\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:30 crc kubenswrapper[5112]: I1208 17:53:30.911988 5112 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b80824bc-0db1-44d0-b177-2e3c27c5818c-util\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:30 crc kubenswrapper[5112]: I1208 17:53:30.911998 5112 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b80824bc-0db1-44d0-b177-2e3c27c5818c-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:31 crc kubenswrapper[5112]: I1208 17:53:31.300479 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p848j" event={"ID":"b80824bc-0db1-44d0-b177-2e3c27c5818c","Type":"ContainerDied","Data":"55564813a1ae31b0e00e6a6a2c2f837517331733bb8de5aa5ad5eccd09e58fe3"} Dec 08 17:53:31 crc kubenswrapper[5112]: I1208 17:53:31.300516 5112 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55564813a1ae31b0e00e6a6a2c2f837517331733bb8de5aa5ad5eccd09e58fe3" Dec 08 17:53:31 crc kubenswrapper[5112]: I1208 17:53:31.300521 5112 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210p848j" Dec 08 17:53:32 crc kubenswrapper[5112]: I1208 17:53:32.313310 5112 generic.go:358] "Generic (PLEG): container finished" podID="ed4c161b-4118-44a3-bb05-62672bf0c9c2" containerID="fff5c12b4fc65ed125a9806378301da22bbb72d500ff6787c425fd67fec62c48" exitCode=0 Dec 08 17:53:32 crc kubenswrapper[5112]: I1208 17:53:32.313382 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p2nq5" event={"ID":"ed4c161b-4118-44a3-bb05-62672bf0c9c2","Type":"ContainerDied","Data":"fff5c12b4fc65ed125a9806378301da22bbb72d500ff6787c425fd67fec62c48"} Dec 08 17:53:32 crc kubenswrapper[5112]: I1208 17:53:32.458928 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffj9wc"] Dec 08 17:53:32 crc kubenswrapper[5112]: I1208 17:53:32.459776 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b80824bc-0db1-44d0-b177-2e3c27c5818c" containerName="extract" Dec 08 17:53:32 crc kubenswrapper[5112]: I1208 17:53:32.459798 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="b80824bc-0db1-44d0-b177-2e3c27c5818c" containerName="extract" Dec 08 17:53:32 crc kubenswrapper[5112]: I1208 17:53:32.459811 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b80824bc-0db1-44d0-b177-2e3c27c5818c" containerName="pull" Dec 08 17:53:32 crc kubenswrapper[5112]: I1208 17:53:32.459818 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="b80824bc-0db1-44d0-b177-2e3c27c5818c" containerName="pull" Dec 08 17:53:32 crc kubenswrapper[5112]: I1208 17:53:32.459831 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b80824bc-0db1-44d0-b177-2e3c27c5818c" containerName="util" Dec 08 17:53:32 crc kubenswrapper[5112]: I1208 17:53:32.459837 5112 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="b80824bc-0db1-44d0-b177-2e3c27c5818c" containerName="util" Dec 08 17:53:32 crc kubenswrapper[5112]: I1208 17:53:32.459938 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="b80824bc-0db1-44d0-b177-2e3c27c5818c" containerName="extract" Dec 08 17:53:32 crc kubenswrapper[5112]: I1208 17:53:32.464074 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffj9wc" Dec 08 17:53:32 crc kubenswrapper[5112]: I1208 17:53:32.466678 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Dec 08 17:53:32 crc kubenswrapper[5112]: I1208 17:53:32.473045 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffj9wc"] Dec 08 17:53:32 crc kubenswrapper[5112]: I1208 17:53:32.637374 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/35757c09-dfca-47ae-8fb9-55137174aad0-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffj9wc\" (UID: \"35757c09-dfca-47ae-8fb9-55137174aad0\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffj9wc" Dec 08 17:53:32 crc kubenswrapper[5112]: I1208 17:53:32.637594 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc4dw\" (UniqueName: \"kubernetes.io/projected/35757c09-dfca-47ae-8fb9-55137174aad0-kube-api-access-tc4dw\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffj9wc\" (UID: \"35757c09-dfca-47ae-8fb9-55137174aad0\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffj9wc" Dec 08 17:53:32 crc kubenswrapper[5112]: I1208 17:53:32.637646 5112 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/35757c09-dfca-47ae-8fb9-55137174aad0-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffj9wc\" (UID: \"35757c09-dfca-47ae-8fb9-55137174aad0\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffj9wc" Dec 08 17:53:32 crc kubenswrapper[5112]: I1208 17:53:32.739749 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/35757c09-dfca-47ae-8fb9-55137174aad0-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffj9wc\" (UID: \"35757c09-dfca-47ae-8fb9-55137174aad0\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffj9wc" Dec 08 17:53:32 crc kubenswrapper[5112]: I1208 17:53:32.739845 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tc4dw\" (UniqueName: \"kubernetes.io/projected/35757c09-dfca-47ae-8fb9-55137174aad0-kube-api-access-tc4dw\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffj9wc\" (UID: \"35757c09-dfca-47ae-8fb9-55137174aad0\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffj9wc" Dec 08 17:53:32 crc kubenswrapper[5112]: I1208 17:53:32.739883 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/35757c09-dfca-47ae-8fb9-55137174aad0-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffj9wc\" (UID: \"35757c09-dfca-47ae-8fb9-55137174aad0\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffj9wc" Dec 08 17:53:32 crc kubenswrapper[5112]: I1208 17:53:32.740318 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/35757c09-dfca-47ae-8fb9-55137174aad0-bundle\") 
pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffj9wc\" (UID: \"35757c09-dfca-47ae-8fb9-55137174aad0\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffj9wc" Dec 08 17:53:32 crc kubenswrapper[5112]: I1208 17:53:32.740425 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/35757c09-dfca-47ae-8fb9-55137174aad0-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffj9wc\" (UID: \"35757c09-dfca-47ae-8fb9-55137174aad0\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffj9wc" Dec 08 17:53:32 crc kubenswrapper[5112]: I1208 17:53:32.759960 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc4dw\" (UniqueName: \"kubernetes.io/projected/35757c09-dfca-47ae-8fb9-55137174aad0-kube-api-access-tc4dw\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffj9wc\" (UID: \"35757c09-dfca-47ae-8fb9-55137174aad0\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffj9wc" Dec 08 17:53:32 crc kubenswrapper[5112]: I1208 17:53:32.785543 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffj9wc" Dec 08 17:53:33 crc kubenswrapper[5112]: I1208 17:53:33.247911 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edjlwg"] Dec 08 17:53:33 crc kubenswrapper[5112]: I1208 17:53:33.258047 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edjlwg" Dec 08 17:53:33 crc kubenswrapper[5112]: I1208 17:53:33.261446 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edjlwg"] Dec 08 17:53:33 crc kubenswrapper[5112]: I1208 17:53:33.281954 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfmb5\" (UniqueName: \"kubernetes.io/projected/852296c7-946f-4494-8e75-e5245a85c97f-kube-api-access-rfmb5\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edjlwg\" (UID: \"852296c7-946f-4494-8e75-e5245a85c97f\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edjlwg" Dec 08 17:53:33 crc kubenswrapper[5112]: I1208 17:53:33.282034 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/852296c7-946f-4494-8e75-e5245a85c97f-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edjlwg\" (UID: \"852296c7-946f-4494-8e75-e5245a85c97f\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edjlwg" Dec 08 17:53:33 crc kubenswrapper[5112]: I1208 17:53:33.282058 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/852296c7-946f-4494-8e75-e5245a85c97f-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edjlwg\" (UID: \"852296c7-946f-4494-8e75-e5245a85c97f\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edjlwg" Dec 08 17:53:33 crc kubenswrapper[5112]: I1208 17:53:33.330932 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p2nq5" 
event={"ID":"ed4c161b-4118-44a3-bb05-62672bf0c9c2","Type":"ContainerStarted","Data":"ae1ba453c90b2830d544d2b22620f9eebafe57cc537ee700882e7d642e08a754"} Dec 08 17:53:33 crc kubenswrapper[5112]: I1208 17:53:33.348029 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffj9wc"] Dec 08 17:53:33 crc kubenswrapper[5112]: I1208 17:53:33.351803 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-p2nq5" podStartSLOduration=5.763097969 podStartE2EDuration="6.351784157s" podCreationTimestamp="2025-12-08 17:53:27 +0000 UTC" firstStartedPulling="2025-12-08 17:53:29.233426139 +0000 UTC m=+786.242974850" lastFinishedPulling="2025-12-08 17:53:29.822112337 +0000 UTC m=+786.831661038" observedRunningTime="2025-12-08 17:53:33.349154955 +0000 UTC m=+790.358703666" watchObservedRunningTime="2025-12-08 17:53:33.351784157 +0000 UTC m=+790.361332858" Dec 08 17:53:33 crc kubenswrapper[5112]: W1208 17:53:33.357482 5112 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35757c09_dfca_47ae_8fb9_55137174aad0.slice/crio-60a7585855c30eeb3e302e12da6958097a5d3924dab392b38cc43aa95a729d07 WatchSource:0}: Error finding container 60a7585855c30eeb3e302e12da6958097a5d3924dab392b38cc43aa95a729d07: Status 404 returned error can't find the container with id 60a7585855c30eeb3e302e12da6958097a5d3924dab392b38cc43aa95a729d07 Dec 08 17:53:33 crc kubenswrapper[5112]: I1208 17:53:33.383227 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/852296c7-946f-4494-8e75-e5245a85c97f-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edjlwg\" (UID: \"852296c7-946f-4494-8e75-e5245a85c97f\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edjlwg" Dec 08 
17:53:33 crc kubenswrapper[5112]: I1208 17:53:33.383272 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/852296c7-946f-4494-8e75-e5245a85c97f-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edjlwg\" (UID: \"852296c7-946f-4494-8e75-e5245a85c97f\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edjlwg" Dec 08 17:53:33 crc kubenswrapper[5112]: I1208 17:53:33.383786 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/852296c7-946f-4494-8e75-e5245a85c97f-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edjlwg\" (UID: \"852296c7-946f-4494-8e75-e5245a85c97f\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edjlwg" Dec 08 17:53:33 crc kubenswrapper[5112]: I1208 17:53:33.383837 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/852296c7-946f-4494-8e75-e5245a85c97f-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edjlwg\" (UID: \"852296c7-946f-4494-8e75-e5245a85c97f\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edjlwg" Dec 08 17:53:33 crc kubenswrapper[5112]: I1208 17:53:33.383907 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rfmb5\" (UniqueName: \"kubernetes.io/projected/852296c7-946f-4494-8e75-e5245a85c97f-kube-api-access-rfmb5\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edjlwg\" (UID: \"852296c7-946f-4494-8e75-e5245a85c97f\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edjlwg" Dec 08 17:53:33 crc kubenswrapper[5112]: I1208 17:53:33.402231 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfmb5\" (UniqueName: 
\"kubernetes.io/projected/852296c7-946f-4494-8e75-e5245a85c97f-kube-api-access-rfmb5\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edjlwg\" (UID: \"852296c7-946f-4494-8e75-e5245a85c97f\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edjlwg" Dec 08 17:53:33 crc kubenswrapper[5112]: I1208 17:53:33.574336 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edjlwg" Dec 08 17:53:34 crc kubenswrapper[5112]: I1208 17:53:34.066188 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edjlwg"] Dec 08 17:53:34 crc kubenswrapper[5112]: W1208 17:53:34.069322 5112 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod852296c7_946f_4494_8e75_e5245a85c97f.slice/crio-afbb6052705c9391363414d3300286c15f638ba8dbf33a0239340a1caa867222 WatchSource:0}: Error finding container afbb6052705c9391363414d3300286c15f638ba8dbf33a0239340a1caa867222: Status 404 returned error can't find the container with id afbb6052705c9391363414d3300286c15f638ba8dbf33a0239340a1caa867222 Dec 08 17:53:34 crc kubenswrapper[5112]: I1208 17:53:34.381405 5112 generic.go:358] "Generic (PLEG): container finished" podID="35757c09-dfca-47ae-8fb9-55137174aad0" containerID="0b58226d7b8c8f3fba77d835b5181dd841ea2ec68113f61d87b74b9aa6078ab2" exitCode=0 Dec 08 17:53:34 crc kubenswrapper[5112]: I1208 17:53:34.381608 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffj9wc" event={"ID":"35757c09-dfca-47ae-8fb9-55137174aad0","Type":"ContainerDied","Data":"0b58226d7b8c8f3fba77d835b5181dd841ea2ec68113f61d87b74b9aa6078ab2"} Dec 08 17:53:34 crc kubenswrapper[5112]: I1208 17:53:34.382173 5112 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffj9wc" event={"ID":"35757c09-dfca-47ae-8fb9-55137174aad0","Type":"ContainerStarted","Data":"60a7585855c30eeb3e302e12da6958097a5d3924dab392b38cc43aa95a729d07"} Dec 08 17:53:34 crc kubenswrapper[5112]: I1208 17:53:34.383576 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edjlwg" event={"ID":"852296c7-946f-4494-8e75-e5245a85c97f","Type":"ContainerStarted","Data":"9d7affbbe6da58158e64ab55313115833a895745c6dfa364fe0aecddc175c359"} Dec 08 17:53:34 crc kubenswrapper[5112]: I1208 17:53:34.383615 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edjlwg" event={"ID":"852296c7-946f-4494-8e75-e5245a85c97f","Type":"ContainerStarted","Data":"afbb6052705c9391363414d3300286c15f638ba8dbf33a0239340a1caa867222"} Dec 08 17:53:35 crc kubenswrapper[5112]: I1208 17:53:35.402527 5112 generic.go:358] "Generic (PLEG): container finished" podID="852296c7-946f-4494-8e75-e5245a85c97f" containerID="9d7affbbe6da58158e64ab55313115833a895745c6dfa364fe0aecddc175c359" exitCode=0 Dec 08 17:53:35 crc kubenswrapper[5112]: I1208 17:53:35.402620 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edjlwg" event={"ID":"852296c7-946f-4494-8e75-e5245a85c97f","Type":"ContainerDied","Data":"9d7affbbe6da58158e64ab55313115833a895745c6dfa364fe0aecddc175c359"} Dec 08 17:53:37 crc kubenswrapper[5112]: I1208 17:53:37.452422 5112 generic.go:358] "Generic (PLEG): container finished" podID="35757c09-dfca-47ae-8fb9-55137174aad0" containerID="8ddee5b1f01a5bdfe0c68d4694c889f129c95c1b5f96be80633446ab72f3691d" exitCode=0 Dec 08 17:53:37 crc kubenswrapper[5112]: I1208 17:53:37.452560 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffj9wc" event={"ID":"35757c09-dfca-47ae-8fb9-55137174aad0","Type":"ContainerDied","Data":"8ddee5b1f01a5bdfe0c68d4694c889f129c95c1b5f96be80633446ab72f3691d"} Dec 08 17:53:38 crc kubenswrapper[5112]: I1208 17:53:38.002626 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-p2nq5" Dec 08 17:53:38 crc kubenswrapper[5112]: I1208 17:53:38.013338 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-p2nq5" Dec 08 17:53:38 crc kubenswrapper[5112]: I1208 17:53:38.049272 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4gccg"] Dec 08 17:53:38 crc kubenswrapper[5112]: I1208 17:53:38.057036 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4gccg" Dec 08 17:53:38 crc kubenswrapper[5112]: I1208 17:53:38.059779 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4gccg"] Dec 08 17:53:38 crc kubenswrapper[5112]: I1208 17:53:38.205771 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c12d6b8-4ec0-4e64-91eb-be8ded1445bb-utilities\") pod \"certified-operators-4gccg\" (UID: \"7c12d6b8-4ec0-4e64-91eb-be8ded1445bb\") " pod="openshift-marketplace/certified-operators-4gccg" Dec 08 17:53:38 crc kubenswrapper[5112]: I1208 17:53:38.205829 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c12d6b8-4ec0-4e64-91eb-be8ded1445bb-catalog-content\") pod \"certified-operators-4gccg\" (UID: \"7c12d6b8-4ec0-4e64-91eb-be8ded1445bb\") " pod="openshift-marketplace/certified-operators-4gccg" Dec 08 17:53:38 crc 
kubenswrapper[5112]: I1208 17:53:38.206063 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4rwt\" (UniqueName: \"kubernetes.io/projected/7c12d6b8-4ec0-4e64-91eb-be8ded1445bb-kube-api-access-c4rwt\") pod \"certified-operators-4gccg\" (UID: \"7c12d6b8-4ec0-4e64-91eb-be8ded1445bb\") " pod="openshift-marketplace/certified-operators-4gccg" Dec 08 17:53:38 crc kubenswrapper[5112]: I1208 17:53:38.307497 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c4rwt\" (UniqueName: \"kubernetes.io/projected/7c12d6b8-4ec0-4e64-91eb-be8ded1445bb-kube-api-access-c4rwt\") pod \"certified-operators-4gccg\" (UID: \"7c12d6b8-4ec0-4e64-91eb-be8ded1445bb\") " pod="openshift-marketplace/certified-operators-4gccg" Dec 08 17:53:38 crc kubenswrapper[5112]: I1208 17:53:38.307588 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c12d6b8-4ec0-4e64-91eb-be8ded1445bb-utilities\") pod \"certified-operators-4gccg\" (UID: \"7c12d6b8-4ec0-4e64-91eb-be8ded1445bb\") " pod="openshift-marketplace/certified-operators-4gccg" Dec 08 17:53:38 crc kubenswrapper[5112]: I1208 17:53:38.307619 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c12d6b8-4ec0-4e64-91eb-be8ded1445bb-catalog-content\") pod \"certified-operators-4gccg\" (UID: \"7c12d6b8-4ec0-4e64-91eb-be8ded1445bb\") " pod="openshift-marketplace/certified-operators-4gccg" Dec 08 17:53:38 crc kubenswrapper[5112]: I1208 17:53:38.308149 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c12d6b8-4ec0-4e64-91eb-be8ded1445bb-catalog-content\") pod \"certified-operators-4gccg\" (UID: \"7c12d6b8-4ec0-4e64-91eb-be8ded1445bb\") " pod="openshift-marketplace/certified-operators-4gccg" Dec 08 
17:53:38 crc kubenswrapper[5112]: I1208 17:53:38.308348 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c12d6b8-4ec0-4e64-91eb-be8ded1445bb-utilities\") pod \"certified-operators-4gccg\" (UID: \"7c12d6b8-4ec0-4e64-91eb-be8ded1445bb\") " pod="openshift-marketplace/certified-operators-4gccg" Dec 08 17:53:38 crc kubenswrapper[5112]: I1208 17:53:38.462286 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4rwt\" (UniqueName: \"kubernetes.io/projected/7c12d6b8-4ec0-4e64-91eb-be8ded1445bb-kube-api-access-c4rwt\") pod \"certified-operators-4gccg\" (UID: \"7c12d6b8-4ec0-4e64-91eb-be8ded1445bb\") " pod="openshift-marketplace/certified-operators-4gccg" Dec 08 17:53:38 crc kubenswrapper[5112]: I1208 17:53:38.469843 5112 generic.go:358] "Generic (PLEG): container finished" podID="35757c09-dfca-47ae-8fb9-55137174aad0" containerID="16a5b7f77870e13fe30da0882a3f0779fcccf651ed8d63797674462337dbbbc6" exitCode=0 Dec 08 17:53:38 crc kubenswrapper[5112]: I1208 17:53:38.470437 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffj9wc" event={"ID":"35757c09-dfca-47ae-8fb9-55137174aad0","Type":"ContainerDied","Data":"16a5b7f77870e13fe30da0882a3f0779fcccf651ed8d63797674462337dbbbc6"} Dec 08 17:53:38 crc kubenswrapper[5112]: I1208 17:53:38.673598 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4gccg" Dec 08 17:53:39 crc kubenswrapper[5112]: I1208 17:53:39.089116 5112 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p2nq5" podUID="ed4c161b-4118-44a3-bb05-62672bf0c9c2" containerName="registry-server" probeResult="failure" output=< Dec 08 17:53:39 crc kubenswrapper[5112]: timeout: failed to connect service ":50051" within 1s Dec 08 17:53:39 crc kubenswrapper[5112]: > Dec 08 17:53:39 crc kubenswrapper[5112]: I1208 17:53:39.266830 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4gccg"] Dec 08 17:53:39 crc kubenswrapper[5112]: I1208 17:53:39.476593 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4gccg" event={"ID":"7c12d6b8-4ec0-4e64-91eb-be8ded1445bb","Type":"ContainerStarted","Data":"7331bdd979f29f3133da166832feb907b055d7b4e2db83d275e0ad38764d6409"} Dec 08 17:53:39 crc kubenswrapper[5112]: I1208 17:53:39.995613 5112 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffj9wc" Dec 08 17:53:40 crc kubenswrapper[5112]: I1208 17:53:40.026758 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/35757c09-dfca-47ae-8fb9-55137174aad0-util\") pod \"35757c09-dfca-47ae-8fb9-55137174aad0\" (UID: \"35757c09-dfca-47ae-8fb9-55137174aad0\") " Dec 08 17:53:40 crc kubenswrapper[5112]: I1208 17:53:40.026799 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/35757c09-dfca-47ae-8fb9-55137174aad0-bundle\") pod \"35757c09-dfca-47ae-8fb9-55137174aad0\" (UID: \"35757c09-dfca-47ae-8fb9-55137174aad0\") " Dec 08 17:53:40 crc kubenswrapper[5112]: I1208 17:53:40.026851 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tc4dw\" (UniqueName: \"kubernetes.io/projected/35757c09-dfca-47ae-8fb9-55137174aad0-kube-api-access-tc4dw\") pod \"35757c09-dfca-47ae-8fb9-55137174aad0\" (UID: \"35757c09-dfca-47ae-8fb9-55137174aad0\") " Dec 08 17:53:40 crc kubenswrapper[5112]: I1208 17:53:40.027784 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35757c09-dfca-47ae-8fb9-55137174aad0-bundle" (OuterVolumeSpecName: "bundle") pod "35757c09-dfca-47ae-8fb9-55137174aad0" (UID: "35757c09-dfca-47ae-8fb9-55137174aad0"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:53:40 crc kubenswrapper[5112]: I1208 17:53:40.046602 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35757c09-dfca-47ae-8fb9-55137174aad0-kube-api-access-tc4dw" (OuterVolumeSpecName: "kube-api-access-tc4dw") pod "35757c09-dfca-47ae-8fb9-55137174aad0" (UID: "35757c09-dfca-47ae-8fb9-55137174aad0"). InnerVolumeSpecName "kube-api-access-tc4dw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:53:40 crc kubenswrapper[5112]: I1208 17:53:40.048731 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35757c09-dfca-47ae-8fb9-55137174aad0-util" (OuterVolumeSpecName: "util") pod "35757c09-dfca-47ae-8fb9-55137174aad0" (UID: "35757c09-dfca-47ae-8fb9-55137174aad0"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:53:40 crc kubenswrapper[5112]: I1208 17:53:40.128216 5112 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/35757c09-dfca-47ae-8fb9-55137174aad0-util\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:40 crc kubenswrapper[5112]: I1208 17:53:40.128257 5112 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/35757c09-dfca-47ae-8fb9-55137174aad0-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:40 crc kubenswrapper[5112]: I1208 17:53:40.128269 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tc4dw\" (UniqueName: \"kubernetes.io/projected/35757c09-dfca-47ae-8fb9-55137174aad0-kube-api-access-tc4dw\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:40 crc kubenswrapper[5112]: I1208 17:53:40.490458 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffj9wc" event={"ID":"35757c09-dfca-47ae-8fb9-55137174aad0","Type":"ContainerDied","Data":"60a7585855c30eeb3e302e12da6958097a5d3924dab392b38cc43aa95a729d07"} Dec 08 17:53:40 crc kubenswrapper[5112]: I1208 17:53:40.490699 5112 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60a7585855c30eeb3e302e12da6958097a5d3924dab392b38cc43aa95a729d07" Dec 08 17:53:40 crc kubenswrapper[5112]: I1208 17:53:40.490845 5112 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ffj9wc" Dec 08 17:53:40 crc kubenswrapper[5112]: I1208 17:53:40.501156 5112 generic.go:358] "Generic (PLEG): container finished" podID="7c12d6b8-4ec0-4e64-91eb-be8ded1445bb" containerID="57359817e8324595152c21a8c6d8ffe3118a2624c81a3b425c9af24c9d9c2bc9" exitCode=0 Dec 08 17:53:40 crc kubenswrapper[5112]: I1208 17:53:40.501243 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4gccg" event={"ID":"7c12d6b8-4ec0-4e64-91eb-be8ded1445bb","Type":"ContainerDied","Data":"57359817e8324595152c21a8c6d8ffe3118a2624c81a3b425c9af24c9d9c2bc9"} Dec 08 17:53:40 crc kubenswrapper[5112]: I1208 17:53:40.584474 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adgddd"] Dec 08 17:53:40 crc kubenswrapper[5112]: I1208 17:53:40.585297 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="35757c09-dfca-47ae-8fb9-55137174aad0" containerName="pull" Dec 08 17:53:40 crc kubenswrapper[5112]: I1208 17:53:40.585316 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="35757c09-dfca-47ae-8fb9-55137174aad0" containerName="pull" Dec 08 17:53:40 crc kubenswrapper[5112]: I1208 17:53:40.585330 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="35757c09-dfca-47ae-8fb9-55137174aad0" containerName="extract" Dec 08 17:53:40 crc kubenswrapper[5112]: I1208 17:53:40.585338 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="35757c09-dfca-47ae-8fb9-55137174aad0" containerName="extract" Dec 08 17:53:40 crc kubenswrapper[5112]: I1208 17:53:40.585354 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="35757c09-dfca-47ae-8fb9-55137174aad0" containerName="util" Dec 08 17:53:40 crc kubenswrapper[5112]: I1208 17:53:40.585361 5112 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="35757c09-dfca-47ae-8fb9-55137174aad0" containerName="util" Dec 08 17:53:40 crc kubenswrapper[5112]: I1208 17:53:40.585457 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="35757c09-dfca-47ae-8fb9-55137174aad0" containerName="extract" Dec 08 17:53:40 crc kubenswrapper[5112]: I1208 17:53:40.659125 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adgddd"] Dec 08 17:53:40 crc kubenswrapper[5112]: I1208 17:53:40.659341 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adgddd" Dec 08 17:53:40 crc kubenswrapper[5112]: I1208 17:53:40.753165 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/568f2ee0-3266-4392-b432-9c7deb6b0422-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adgddd\" (UID: \"568f2ee0-3266-4392-b432-9c7deb6b0422\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adgddd" Dec 08 17:53:40 crc kubenswrapper[5112]: I1208 17:53:40.753489 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/568f2ee0-3266-4392-b432-9c7deb6b0422-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adgddd\" (UID: \"568f2ee0-3266-4392-b432-9c7deb6b0422\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adgddd" Dec 08 17:53:40 crc kubenswrapper[5112]: I1208 17:53:40.753599 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntp7q\" (UniqueName: \"kubernetes.io/projected/568f2ee0-3266-4392-b432-9c7deb6b0422-kube-api-access-ntp7q\") pod 
\"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adgddd\" (UID: \"568f2ee0-3266-4392-b432-9c7deb6b0422\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adgddd" Dec 08 17:53:40 crc kubenswrapper[5112]: I1208 17:53:40.855172 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/568f2ee0-3266-4392-b432-9c7deb6b0422-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adgddd\" (UID: \"568f2ee0-3266-4392-b432-9c7deb6b0422\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adgddd" Dec 08 17:53:40 crc kubenswrapper[5112]: I1208 17:53:40.855276 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/568f2ee0-3266-4392-b432-9c7deb6b0422-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adgddd\" (UID: \"568f2ee0-3266-4392-b432-9c7deb6b0422\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adgddd" Dec 08 17:53:40 crc kubenswrapper[5112]: I1208 17:53:40.855311 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ntp7q\" (UniqueName: \"kubernetes.io/projected/568f2ee0-3266-4392-b432-9c7deb6b0422-kube-api-access-ntp7q\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adgddd\" (UID: \"568f2ee0-3266-4392-b432-9c7deb6b0422\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adgddd" Dec 08 17:53:40 crc kubenswrapper[5112]: I1208 17:53:40.855767 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/568f2ee0-3266-4392-b432-9c7deb6b0422-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adgddd\" (UID: \"568f2ee0-3266-4392-b432-9c7deb6b0422\") " 
pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adgddd" Dec 08 17:53:40 crc kubenswrapper[5112]: I1208 17:53:40.856069 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/568f2ee0-3266-4392-b432-9c7deb6b0422-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adgddd\" (UID: \"568f2ee0-3266-4392-b432-9c7deb6b0422\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adgddd" Dec 08 17:53:40 crc kubenswrapper[5112]: I1208 17:53:40.894242 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntp7q\" (UniqueName: \"kubernetes.io/projected/568f2ee0-3266-4392-b432-9c7deb6b0422-kube-api-access-ntp7q\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adgddd\" (UID: \"568f2ee0-3266-4392-b432-9c7deb6b0422\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adgddd" Dec 08 17:53:40 crc kubenswrapper[5112]: I1208 17:53:40.973519 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adgddd" Dec 08 17:53:41 crc kubenswrapper[5112]: I1208 17:53:41.352179 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adgddd"] Dec 08 17:53:41 crc kubenswrapper[5112]: W1208 17:53:41.370170 5112 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod568f2ee0_3266_4392_b432_9c7deb6b0422.slice/crio-8943c644dfa5fe70f4683106b4b7f41ea7797638157f7becd682834383bf3b9b WatchSource:0}: Error finding container 8943c644dfa5fe70f4683106b4b7f41ea7797638157f7becd682834383bf3b9b: Status 404 returned error can't find the container with id 8943c644dfa5fe70f4683106b4b7f41ea7797638157f7becd682834383bf3b9b Dec 08 17:53:41 crc kubenswrapper[5112]: I1208 17:53:41.508030 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adgddd" event={"ID":"568f2ee0-3266-4392-b432-9c7deb6b0422","Type":"ContainerStarted","Data":"8943c644dfa5fe70f4683106b4b7f41ea7797638157f7becd682834383bf3b9b"} Dec 08 17:53:41 crc kubenswrapper[5112]: I1208 17:53:41.707496 5112 patch_prober.go:28] interesting pod/machine-config-daemon-s6wzf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:53:41 crc kubenswrapper[5112]: I1208 17:53:41.707559 5112 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" podUID="95e46da0-94bb-4d22-804b-b3018984cdac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 17:53:42 crc 
kubenswrapper[5112]: I1208 17:53:42.515134 5112 generic.go:358] "Generic (PLEG): container finished" podID="568f2ee0-3266-4392-b432-9c7deb6b0422" containerID="5e79c559ea1eee6c69ff17611fd2ba78083e300b94c4bc77ea27062f1ee5f193" exitCode=0 Dec 08 17:53:42 crc kubenswrapper[5112]: I1208 17:53:42.515253 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adgddd" event={"ID":"568f2ee0-3266-4392-b432-9c7deb6b0422","Type":"ContainerDied","Data":"5e79c559ea1eee6c69ff17611fd2ba78083e300b94c4bc77ea27062f1ee5f193"} Dec 08 17:53:42 crc kubenswrapper[5112]: I1208 17:53:42.519533 5112 generic.go:358] "Generic (PLEG): container finished" podID="7c12d6b8-4ec0-4e64-91eb-be8ded1445bb" containerID="24a58afe8046d8946d1d426214ef6c86ae9e2cba5ca23f6f1a3f13dc7cca77bb" exitCode=0 Dec 08 17:53:42 crc kubenswrapper[5112]: I1208 17:53:42.519638 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4gccg" event={"ID":"7c12d6b8-4ec0-4e64-91eb-be8ded1445bb","Type":"ContainerDied","Data":"24a58afe8046d8946d1d426214ef6c86ae9e2cba5ca23f6f1a3f13dc7cca77bb"} Dec 08 17:53:42 crc kubenswrapper[5112]: I1208 17:53:42.660975 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-m4gvm"] Dec 08 17:53:42 crc kubenswrapper[5112]: I1208 17:53:42.691707 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-86648f486b-m4gvm" Dec 08 17:53:42 crc kubenswrapper[5112]: I1208 17:53:42.696644 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-45dhq\"" Dec 08 17:53:42 crc kubenswrapper[5112]: I1208 17:53:42.696895 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\"" Dec 08 17:53:42 crc kubenswrapper[5112]: I1208 17:53:42.697018 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\"" Dec 08 17:53:42 crc kubenswrapper[5112]: I1208 17:53:42.698738 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-m4gvm"] Dec 08 17:53:42 crc kubenswrapper[5112]: I1208 17:53:42.718336 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-59fcf9fbff-2mgl9"] Dec 08 17:53:42 crc kubenswrapper[5112]: I1208 17:53:42.726786 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59fcf9fbff-2mgl9" Dec 08 17:53:42 crc kubenswrapper[5112]: I1208 17:53:42.729092 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-x2jfd\"" Dec 08 17:53:42 crc kubenswrapper[5112]: I1208 17:53:42.731744 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\"" Dec 08 17:53:42 crc kubenswrapper[5112]: I1208 17:53:42.759621 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-59fcf9fbff-m79m9"] Dec 08 17:53:42 crc kubenswrapper[5112]: I1208 17:53:42.764418 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59fcf9fbff-m79m9" Dec 08 17:53:42 crc kubenswrapper[5112]: I1208 17:53:42.765042 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-59fcf9fbff-2mgl9"] Dec 08 17:53:42 crc kubenswrapper[5112]: I1208 17:53:42.771198 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-59fcf9fbff-m79m9"] Dec 08 17:53:42 crc kubenswrapper[5112]: I1208 17:53:42.784728 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqx9m\" (UniqueName: \"kubernetes.io/projected/8a394832-d3c8-43cf-8f57-c4615f527daf-kube-api-access-wqx9m\") pod \"obo-prometheus-operator-86648f486b-m4gvm\" (UID: \"8a394832-d3c8-43cf-8f57-c4615f527daf\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-m4gvm" Dec 08 17:53:42 crc kubenswrapper[5112]: I1208 17:53:42.784825 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4cf76964-3724-4ad9-b132-b8d54bef7ab6-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-59fcf9fbff-2mgl9\" (UID: \"4cf76964-3724-4ad9-b132-b8d54bef7ab6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-59fcf9fbff-2mgl9" Dec 08 17:53:42 crc kubenswrapper[5112]: I1208 17:53:42.784854 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4cf76964-3724-4ad9-b132-b8d54bef7ab6-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-59fcf9fbff-2mgl9\" (UID: \"4cf76964-3724-4ad9-b132-b8d54bef7ab6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-59fcf9fbff-2mgl9" Dec 08 17:53:42 crc kubenswrapper[5112]: I1208 17:53:42.886601 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wqx9m\" (UniqueName: \"kubernetes.io/projected/8a394832-d3c8-43cf-8f57-c4615f527daf-kube-api-access-wqx9m\") pod \"obo-prometheus-operator-86648f486b-m4gvm\" (UID: \"8a394832-d3c8-43cf-8f57-c4615f527daf\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-m4gvm" Dec 08 17:53:42 crc kubenswrapper[5112]: I1208 17:53:42.886657 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e45aa7ca-ac03-483b-9817-6f4597deb2f5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-59fcf9fbff-m79m9\" (UID: \"e45aa7ca-ac03-483b-9817-6f4597deb2f5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-59fcf9fbff-m79m9" Dec 08 17:53:42 crc kubenswrapper[5112]: I1208 17:53:42.886704 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4cf76964-3724-4ad9-b132-b8d54bef7ab6-apiservice-cert\") pod 
\"obo-prometheus-operator-admission-webhook-59fcf9fbff-2mgl9\" (UID: \"4cf76964-3724-4ad9-b132-b8d54bef7ab6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-59fcf9fbff-2mgl9" Dec 08 17:53:42 crc kubenswrapper[5112]: I1208 17:53:42.886722 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4cf76964-3724-4ad9-b132-b8d54bef7ab6-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-59fcf9fbff-2mgl9\" (UID: \"4cf76964-3724-4ad9-b132-b8d54bef7ab6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-59fcf9fbff-2mgl9" Dec 08 17:53:42 crc kubenswrapper[5112]: I1208 17:53:42.886744 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e45aa7ca-ac03-483b-9817-6f4597deb2f5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-59fcf9fbff-m79m9\" (UID: \"e45aa7ca-ac03-483b-9817-6f4597deb2f5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-59fcf9fbff-m79m9" Dec 08 17:53:42 crc kubenswrapper[5112]: I1208 17:53:42.893221 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4cf76964-3724-4ad9-b132-b8d54bef7ab6-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-59fcf9fbff-2mgl9\" (UID: \"4cf76964-3724-4ad9-b132-b8d54bef7ab6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-59fcf9fbff-2mgl9" Dec 08 17:53:42 crc kubenswrapper[5112]: I1208 17:53:42.902107 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4cf76964-3724-4ad9-b132-b8d54bef7ab6-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-59fcf9fbff-2mgl9\" (UID: \"4cf76964-3724-4ad9-b132-b8d54bef7ab6\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-59fcf9fbff-2mgl9" Dec 08 17:53:42 crc kubenswrapper[5112]: I1208 17:53:42.909033 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqx9m\" (UniqueName: \"kubernetes.io/projected/8a394832-d3c8-43cf-8f57-c4615f527daf-kube-api-access-wqx9m\") pod \"obo-prometheus-operator-86648f486b-m4gvm\" (UID: \"8a394832-d3c8-43cf-8f57-c4615f527daf\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-m4gvm" Dec 08 17:53:42 crc kubenswrapper[5112]: I1208 17:53:42.938577 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-78c97476f4-t6qd6"] Dec 08 17:53:42 crc kubenswrapper[5112]: I1208 17:53:42.947357 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-78c97476f4-t6qd6" Dec 08 17:53:42 crc kubenswrapper[5112]: I1208 17:53:42.949660 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-s98rn\"" Dec 08 17:53:42 crc kubenswrapper[5112]: I1208 17:53:42.950007 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\"" Dec 08 17:53:42 crc kubenswrapper[5112]: I1208 17:53:42.961574 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-78c97476f4-t6qd6"] Dec 08 17:53:42 crc kubenswrapper[5112]: I1208 17:53:42.992911 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e45aa7ca-ac03-483b-9817-6f4597deb2f5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-59fcf9fbff-m79m9\" (UID: \"e45aa7ca-ac03-483b-9817-6f4597deb2f5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-59fcf9fbff-m79m9" Dec 08 17:53:42 crc kubenswrapper[5112]: I1208 
17:53:42.993574 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e45aa7ca-ac03-483b-9817-6f4597deb2f5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-59fcf9fbff-m79m9\" (UID: \"e45aa7ca-ac03-483b-9817-6f4597deb2f5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-59fcf9fbff-m79m9" Dec 08 17:53:42 crc kubenswrapper[5112]: I1208 17:53:42.999669 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e45aa7ca-ac03-483b-9817-6f4597deb2f5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-59fcf9fbff-m79m9\" (UID: \"e45aa7ca-ac03-483b-9817-6f4597deb2f5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-59fcf9fbff-m79m9" Dec 08 17:53:43 crc kubenswrapper[5112]: I1208 17:53:43.002618 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e45aa7ca-ac03-483b-9817-6f4597deb2f5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-59fcf9fbff-m79m9\" (UID: \"e45aa7ca-ac03-483b-9817-6f4597deb2f5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-59fcf9fbff-m79m9" Dec 08 17:53:43 crc kubenswrapper[5112]: I1208 17:53:43.026042 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-86648f486b-m4gvm" Dec 08 17:53:43 crc kubenswrapper[5112]: I1208 17:53:43.057099 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59fcf9fbff-2mgl9" Dec 08 17:53:43 crc kubenswrapper[5112]: I1208 17:53:43.094539 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-tj4k9"] Dec 08 17:53:43 crc kubenswrapper[5112]: I1208 17:53:43.094798 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/0849d3b3-1a57-4c5f-88b4-7b2b9278c70f-observability-operator-tls\") pod \"observability-operator-78c97476f4-t6qd6\" (UID: \"0849d3b3-1a57-4c5f-88b4-7b2b9278c70f\") " pod="openshift-operators/observability-operator-78c97476f4-t6qd6" Dec 08 17:53:43 crc kubenswrapper[5112]: I1208 17:53:43.094831 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xdcg\" (UniqueName: \"kubernetes.io/projected/0849d3b3-1a57-4c5f-88b4-7b2b9278c70f-kube-api-access-5xdcg\") pod \"observability-operator-78c97476f4-t6qd6\" (UID: \"0849d3b3-1a57-4c5f-88b4-7b2b9278c70f\") " pod="openshift-operators/observability-operator-78c97476f4-t6qd6" Dec 08 17:53:43 crc kubenswrapper[5112]: I1208 17:53:43.098492 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59fcf9fbff-m79m9" Dec 08 17:53:43 crc kubenswrapper[5112]: I1208 17:53:43.117449 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-tj4k9"] Dec 08 17:53:43 crc kubenswrapper[5112]: I1208 17:53:43.117626 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-68bdb49cbf-tj4k9" Dec 08 17:53:43 crc kubenswrapper[5112]: I1208 17:53:43.120621 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-89zqd\"" Dec 08 17:53:43 crc kubenswrapper[5112]: I1208 17:53:43.195880 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmnl7\" (UniqueName: \"kubernetes.io/projected/6787c278-01cc-4348-85d1-5619e1df4e1e-kube-api-access-rmnl7\") pod \"perses-operator-68bdb49cbf-tj4k9\" (UID: \"6787c278-01cc-4348-85d1-5619e1df4e1e\") " pod="openshift-operators/perses-operator-68bdb49cbf-tj4k9" Dec 08 17:53:43 crc kubenswrapper[5112]: I1208 17:53:43.195959 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/6787c278-01cc-4348-85d1-5619e1df4e1e-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-tj4k9\" (UID: \"6787c278-01cc-4348-85d1-5619e1df4e1e\") " pod="openshift-operators/perses-operator-68bdb49cbf-tj4k9" Dec 08 17:53:43 crc kubenswrapper[5112]: I1208 17:53:43.196036 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/0849d3b3-1a57-4c5f-88b4-7b2b9278c70f-observability-operator-tls\") pod \"observability-operator-78c97476f4-t6qd6\" (UID: \"0849d3b3-1a57-4c5f-88b4-7b2b9278c70f\") " pod="openshift-operators/observability-operator-78c97476f4-t6qd6" Dec 08 17:53:43 crc kubenswrapper[5112]: I1208 17:53:43.196057 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5xdcg\" (UniqueName: \"kubernetes.io/projected/0849d3b3-1a57-4c5f-88b4-7b2b9278c70f-kube-api-access-5xdcg\") pod \"observability-operator-78c97476f4-t6qd6\" (UID: \"0849d3b3-1a57-4c5f-88b4-7b2b9278c70f\") " 
pod="openshift-operators/observability-operator-78c97476f4-t6qd6" Dec 08 17:53:43 crc kubenswrapper[5112]: I1208 17:53:43.215172 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/0849d3b3-1a57-4c5f-88b4-7b2b9278c70f-observability-operator-tls\") pod \"observability-operator-78c97476f4-t6qd6\" (UID: \"0849d3b3-1a57-4c5f-88b4-7b2b9278c70f\") " pod="openshift-operators/observability-operator-78c97476f4-t6qd6" Dec 08 17:53:43 crc kubenswrapper[5112]: I1208 17:53:43.215771 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xdcg\" (UniqueName: \"kubernetes.io/projected/0849d3b3-1a57-4c5f-88b4-7b2b9278c70f-kube-api-access-5xdcg\") pod \"observability-operator-78c97476f4-t6qd6\" (UID: \"0849d3b3-1a57-4c5f-88b4-7b2b9278c70f\") " pod="openshift-operators/observability-operator-78c97476f4-t6qd6" Dec 08 17:53:43 crc kubenswrapper[5112]: I1208 17:53:43.274344 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-78c97476f4-t6qd6" Dec 08 17:53:43 crc kubenswrapper[5112]: I1208 17:53:43.297663 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/6787c278-01cc-4348-85d1-5619e1df4e1e-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-tj4k9\" (UID: \"6787c278-01cc-4348-85d1-5619e1df4e1e\") " pod="openshift-operators/perses-operator-68bdb49cbf-tj4k9" Dec 08 17:53:43 crc kubenswrapper[5112]: I1208 17:53:43.298015 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rmnl7\" (UniqueName: \"kubernetes.io/projected/6787c278-01cc-4348-85d1-5619e1df4e1e-kube-api-access-rmnl7\") pod \"perses-operator-68bdb49cbf-tj4k9\" (UID: \"6787c278-01cc-4348-85d1-5619e1df4e1e\") " pod="openshift-operators/perses-operator-68bdb49cbf-tj4k9" Dec 08 17:53:43 crc kubenswrapper[5112]: I1208 17:53:43.298878 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/6787c278-01cc-4348-85d1-5619e1df4e1e-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-tj4k9\" (UID: \"6787c278-01cc-4348-85d1-5619e1df4e1e\") " pod="openshift-operators/perses-operator-68bdb49cbf-tj4k9" Dec 08 17:53:43 crc kubenswrapper[5112]: I1208 17:53:43.335236 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmnl7\" (UniqueName: \"kubernetes.io/projected/6787c278-01cc-4348-85d1-5619e1df4e1e-kube-api-access-rmnl7\") pod \"perses-operator-68bdb49cbf-tj4k9\" (UID: \"6787c278-01cc-4348-85d1-5619e1df4e1e\") " pod="openshift-operators/perses-operator-68bdb49cbf-tj4k9" Dec 08 17:53:43 crc kubenswrapper[5112]: I1208 17:53:43.376268 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-m4gvm"] Dec 08 17:53:43 crc kubenswrapper[5112]: I1208 
17:53:43.449485 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-68bdb49cbf-tj4k9" Dec 08 17:53:43 crc kubenswrapper[5112]: I1208 17:53:43.492287 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-59fcf9fbff-m79m9"] Dec 08 17:53:43 crc kubenswrapper[5112]: I1208 17:53:43.544360 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-86648f486b-m4gvm" event={"ID":"8a394832-d3c8-43cf-8f57-c4615f527daf","Type":"ContainerStarted","Data":"949f5e80b19d616105149fd98fb3556224126125401b4113475f002a88f5bb41"} Dec 08 17:53:43 crc kubenswrapper[5112]: I1208 17:53:43.554257 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4gccg" event={"ID":"7c12d6b8-4ec0-4e64-91eb-be8ded1445bb","Type":"ContainerStarted","Data":"5c21db7abf4eb2e022cfbbd105d48d823a2f1cfdba198705b7f7335cdbbe6f1f"} Dec 08 17:53:43 crc kubenswrapper[5112]: I1208 17:53:43.577217 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4gccg" podStartSLOduration=5.032541081 podStartE2EDuration="5.577197478s" podCreationTimestamp="2025-12-08 17:53:38 +0000 UTC" firstStartedPulling="2025-12-08 17:53:40.502136169 +0000 UTC m=+797.511684870" lastFinishedPulling="2025-12-08 17:53:41.046792566 +0000 UTC m=+798.056341267" observedRunningTime="2025-12-08 17:53:43.576483349 +0000 UTC m=+800.586032070" watchObservedRunningTime="2025-12-08 17:53:43.577197478 +0000 UTC m=+800.586746179" Dec 08 17:53:43 crc kubenswrapper[5112]: I1208 17:53:43.750916 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-59fcf9fbff-2mgl9"] Dec 08 17:53:43 crc kubenswrapper[5112]: W1208 17:53:43.766130 5112 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cf76964_3724_4ad9_b132_b8d54bef7ab6.slice/crio-71ed60ecd93bd5c92bcd8bc86553184498519897ab726fc4467481c9876ca6f5 WatchSource:0}: Error finding container 71ed60ecd93bd5c92bcd8bc86553184498519897ab726fc4467481c9876ca6f5: Status 404 returned error can't find the container with id 71ed60ecd93bd5c92bcd8bc86553184498519897ab726fc4467481c9876ca6f5 Dec 08 17:53:43 crc kubenswrapper[5112]: I1208 17:53:43.868771 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-78c97476f4-t6qd6"] Dec 08 17:53:43 crc kubenswrapper[5112]: I1208 17:53:43.981046 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-tj4k9"] Dec 08 17:53:44 crc kubenswrapper[5112]: I1208 17:53:44.560613 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59fcf9fbff-m79m9" event={"ID":"e45aa7ca-ac03-483b-9817-6f4597deb2f5","Type":"ContainerStarted","Data":"28b89b32d43643ac69dd35efa39eae8f3234640ef7c278fe55d049ebfce3399c"} Dec 08 17:53:44 crc kubenswrapper[5112]: I1208 17:53:44.561566 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59fcf9fbff-2mgl9" event={"ID":"4cf76964-3724-4ad9-b132-b8d54bef7ab6","Type":"ContainerStarted","Data":"71ed60ecd93bd5c92bcd8bc86553184498519897ab726fc4467481c9876ca6f5"} Dec 08 17:53:44 crc kubenswrapper[5112]: I1208 17:53:44.563469 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-68bdb49cbf-tj4k9" event={"ID":"6787c278-01cc-4348-85d1-5619e1df4e1e","Type":"ContainerStarted","Data":"231c4fbdb15fb2a18ee8ad680f2f5eb87b9335a46a3fffd85c665ea0b3adfc3f"} Dec 08 17:53:44 crc kubenswrapper[5112]: I1208 17:53:44.566486 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-78c97476f4-t6qd6" 
event={"ID":"0849d3b3-1a57-4c5f-88b4-7b2b9278c70f","Type":"ContainerStarted","Data":"72351733daf15524d64a30a6dcae696399820c9cb08370ab079b4c1909049f6e"} Dec 08 17:53:45 crc kubenswrapper[5112]: I1208 17:53:45.194113 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-cpvvv" Dec 08 17:53:45 crc kubenswrapper[5112]: I1208 17:53:45.399738 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-vpxb8"] Dec 08 17:53:46 crc kubenswrapper[5112]: I1208 17:53:46.833961 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dzvnm"] Dec 08 17:53:46 crc kubenswrapper[5112]: I1208 17:53:46.848406 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dzvnm" Dec 08 17:53:46 crc kubenswrapper[5112]: I1208 17:53:46.851827 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dzvnm"] Dec 08 17:53:46 crc kubenswrapper[5112]: I1208 17:53:46.999620 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c402130e-f913-4588-9b7c-862415a55ca3-catalog-content\") pod \"community-operators-dzvnm\" (UID: \"c402130e-f913-4588-9b7c-862415a55ca3\") " pod="openshift-marketplace/community-operators-dzvnm" Dec 08 17:53:46 crc kubenswrapper[5112]: I1208 17:53:46.999670 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58dqb\" (UniqueName: \"kubernetes.io/projected/c402130e-f913-4588-9b7c-862415a55ca3-kube-api-access-58dqb\") pod \"community-operators-dzvnm\" (UID: \"c402130e-f913-4588-9b7c-862415a55ca3\") " pod="openshift-marketplace/community-operators-dzvnm" Dec 08 17:53:46 crc kubenswrapper[5112]: I1208 17:53:46.999714 5112 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c402130e-f913-4588-9b7c-862415a55ca3-utilities\") pod \"community-operators-dzvnm\" (UID: \"c402130e-f913-4588-9b7c-862415a55ca3\") " pod="openshift-marketplace/community-operators-dzvnm" Dec 08 17:53:47 crc kubenswrapper[5112]: I1208 17:53:47.101124 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c402130e-f913-4588-9b7c-862415a55ca3-utilities\") pod \"community-operators-dzvnm\" (UID: \"c402130e-f913-4588-9b7c-862415a55ca3\") " pod="openshift-marketplace/community-operators-dzvnm" Dec 08 17:53:47 crc kubenswrapper[5112]: I1208 17:53:47.101215 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c402130e-f913-4588-9b7c-862415a55ca3-catalog-content\") pod \"community-operators-dzvnm\" (UID: \"c402130e-f913-4588-9b7c-862415a55ca3\") " pod="openshift-marketplace/community-operators-dzvnm" Dec 08 17:53:47 crc kubenswrapper[5112]: I1208 17:53:47.101242 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-58dqb\" (UniqueName: \"kubernetes.io/projected/c402130e-f913-4588-9b7c-862415a55ca3-kube-api-access-58dqb\") pod \"community-operators-dzvnm\" (UID: \"c402130e-f913-4588-9b7c-862415a55ca3\") " pod="openshift-marketplace/community-operators-dzvnm" Dec 08 17:53:47 crc kubenswrapper[5112]: I1208 17:53:47.102042 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c402130e-f913-4588-9b7c-862415a55ca3-utilities\") pod \"community-operators-dzvnm\" (UID: \"c402130e-f913-4588-9b7c-862415a55ca3\") " pod="openshift-marketplace/community-operators-dzvnm" Dec 08 17:53:47 crc kubenswrapper[5112]: I1208 17:53:47.102290 5112 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c402130e-f913-4588-9b7c-862415a55ca3-catalog-content\") pod \"community-operators-dzvnm\" (UID: \"c402130e-f913-4588-9b7c-862415a55ca3\") " pod="openshift-marketplace/community-operators-dzvnm" Dec 08 17:53:47 crc kubenswrapper[5112]: I1208 17:53:47.124396 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-58dqb\" (UniqueName: \"kubernetes.io/projected/c402130e-f913-4588-9b7c-862415a55ca3-kube-api-access-58dqb\") pod \"community-operators-dzvnm\" (UID: \"c402130e-f913-4588-9b7c-862415a55ca3\") " pod="openshift-marketplace/community-operators-dzvnm" Dec 08 17:53:47 crc kubenswrapper[5112]: I1208 17:53:47.168458 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dzvnm" Dec 08 17:53:48 crc kubenswrapper[5112]: I1208 17:53:48.027538 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-p2nq5" Dec 08 17:53:48 crc kubenswrapper[5112]: I1208 17:53:48.122054 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-p2nq5" Dec 08 17:53:48 crc kubenswrapper[5112]: I1208 17:53:48.674159 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-4gccg" Dec 08 17:53:48 crc kubenswrapper[5112]: I1208 17:53:48.674245 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4gccg" Dec 08 17:53:48 crc kubenswrapper[5112]: I1208 17:53:48.741128 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4gccg" Dec 08 17:53:49 crc kubenswrapper[5112]: I1208 17:53:49.715464 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-4gccg" Dec 08 17:53:51 crc kubenswrapper[5112]: I1208 17:53:51.000123 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p2nq5"] Dec 08 17:53:51 crc kubenswrapper[5112]: I1208 17:53:51.000612 5112 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-p2nq5" podUID="ed4c161b-4118-44a3-bb05-62672bf0c9c2" containerName="registry-server" containerID="cri-o://ae1ba453c90b2830d544d2b22620f9eebafe57cc537ee700882e7d642e08a754" gracePeriod=2 Dec 08 17:53:51 crc kubenswrapper[5112]: I1208 17:53:51.601093 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4gccg"] Dec 08 17:53:51 crc kubenswrapper[5112]: I1208 17:53:51.653805 5112 generic.go:358] "Generic (PLEG): container finished" podID="ed4c161b-4118-44a3-bb05-62672bf0c9c2" containerID="ae1ba453c90b2830d544d2b22620f9eebafe57cc537ee700882e7d642e08a754" exitCode=0 Dec 08 17:53:51 crc kubenswrapper[5112]: I1208 17:53:51.653904 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p2nq5" event={"ID":"ed4c161b-4118-44a3-bb05-62672bf0c9c2","Type":"ContainerDied","Data":"ae1ba453c90b2830d544d2b22620f9eebafe57cc537ee700882e7d642e08a754"} Dec 08 17:53:51 crc kubenswrapper[5112]: I1208 17:53:51.654351 5112 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4gccg" podUID="7c12d6b8-4ec0-4e64-91eb-be8ded1445bb" containerName="registry-server" containerID="cri-o://5c21db7abf4eb2e022cfbbd105d48d823a2f1cfdba198705b7f7335cdbbe6f1f" gracePeriod=2 Dec 08 17:53:52 crc kubenswrapper[5112]: I1208 17:53:52.660338 5112 generic.go:358] "Generic (PLEG): container finished" podID="7c12d6b8-4ec0-4e64-91eb-be8ded1445bb" containerID="5c21db7abf4eb2e022cfbbd105d48d823a2f1cfdba198705b7f7335cdbbe6f1f" exitCode=0 Dec 08 17:53:52 crc 
kubenswrapper[5112]: I1208 17:53:52.660473 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4gccg" event={"ID":"7c12d6b8-4ec0-4e64-91eb-be8ded1445bb","Type":"ContainerDied","Data":"5c21db7abf4eb2e022cfbbd105d48d823a2f1cfdba198705b7f7335cdbbe6f1f"} Dec 08 17:53:57 crc kubenswrapper[5112]: I1208 17:53:57.098332 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4gccg" Dec 08 17:53:57 crc kubenswrapper[5112]: I1208 17:53:57.104629 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p2nq5" Dec 08 17:53:57 crc kubenswrapper[5112]: I1208 17:53:57.193401 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c12d6b8-4ec0-4e64-91eb-be8ded1445bb-utilities\") pod \"7c12d6b8-4ec0-4e64-91eb-be8ded1445bb\" (UID: \"7c12d6b8-4ec0-4e64-91eb-be8ded1445bb\") " Dec 08 17:53:57 crc kubenswrapper[5112]: I1208 17:53:57.193475 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4rwt\" (UniqueName: \"kubernetes.io/projected/7c12d6b8-4ec0-4e64-91eb-be8ded1445bb-kube-api-access-c4rwt\") pod \"7c12d6b8-4ec0-4e64-91eb-be8ded1445bb\" (UID: \"7c12d6b8-4ec0-4e64-91eb-be8ded1445bb\") " Dec 08 17:53:57 crc kubenswrapper[5112]: I1208 17:53:57.193533 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed4c161b-4118-44a3-bb05-62672bf0c9c2-catalog-content\") pod \"ed4c161b-4118-44a3-bb05-62672bf0c9c2\" (UID: \"ed4c161b-4118-44a3-bb05-62672bf0c9c2\") " Dec 08 17:53:57 crc kubenswrapper[5112]: I1208 17:53:57.193722 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/ed4c161b-4118-44a3-bb05-62672bf0c9c2-utilities\") pod \"ed4c161b-4118-44a3-bb05-62672bf0c9c2\" (UID: \"ed4c161b-4118-44a3-bb05-62672bf0c9c2\") " Dec 08 17:53:57 crc kubenswrapper[5112]: I1208 17:53:57.193752 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c12d6b8-4ec0-4e64-91eb-be8ded1445bb-catalog-content\") pod \"7c12d6b8-4ec0-4e64-91eb-be8ded1445bb\" (UID: \"7c12d6b8-4ec0-4e64-91eb-be8ded1445bb\") " Dec 08 17:53:57 crc kubenswrapper[5112]: I1208 17:53:57.193822 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5c5x\" (UniqueName: \"kubernetes.io/projected/ed4c161b-4118-44a3-bb05-62672bf0c9c2-kube-api-access-q5c5x\") pod \"ed4c161b-4118-44a3-bb05-62672bf0c9c2\" (UID: \"ed4c161b-4118-44a3-bb05-62672bf0c9c2\") " Dec 08 17:53:57 crc kubenswrapper[5112]: I1208 17:53:57.194582 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c12d6b8-4ec0-4e64-91eb-be8ded1445bb-utilities" (OuterVolumeSpecName: "utilities") pod "7c12d6b8-4ec0-4e64-91eb-be8ded1445bb" (UID: "7c12d6b8-4ec0-4e64-91eb-be8ded1445bb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:53:57 crc kubenswrapper[5112]: I1208 17:53:57.202842 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed4c161b-4118-44a3-bb05-62672bf0c9c2-utilities" (OuterVolumeSpecName: "utilities") pod "ed4c161b-4118-44a3-bb05-62672bf0c9c2" (UID: "ed4c161b-4118-44a3-bb05-62672bf0c9c2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:53:57 crc kubenswrapper[5112]: I1208 17:53:57.221032 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed4c161b-4118-44a3-bb05-62672bf0c9c2-kube-api-access-q5c5x" (OuterVolumeSpecName: "kube-api-access-q5c5x") pod "ed4c161b-4118-44a3-bb05-62672bf0c9c2" (UID: "ed4c161b-4118-44a3-bb05-62672bf0c9c2"). InnerVolumeSpecName "kube-api-access-q5c5x". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:53:57 crc kubenswrapper[5112]: I1208 17:53:57.227734 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c12d6b8-4ec0-4e64-91eb-be8ded1445bb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c12d6b8-4ec0-4e64-91eb-be8ded1445bb" (UID: "7c12d6b8-4ec0-4e64-91eb-be8ded1445bb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:53:57 crc kubenswrapper[5112]: I1208 17:53:57.229968 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c12d6b8-4ec0-4e64-91eb-be8ded1445bb-kube-api-access-c4rwt" (OuterVolumeSpecName: "kube-api-access-c4rwt") pod "7c12d6b8-4ec0-4e64-91eb-be8ded1445bb" (UID: "7c12d6b8-4ec0-4e64-91eb-be8ded1445bb"). InnerVolumeSpecName "kube-api-access-c4rwt". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:53:57 crc kubenswrapper[5112]: I1208 17:53:57.295425 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c4rwt\" (UniqueName: \"kubernetes.io/projected/7c12d6b8-4ec0-4e64-91eb-be8ded1445bb-kube-api-access-c4rwt\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:57 crc kubenswrapper[5112]: I1208 17:53:57.295460 5112 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed4c161b-4118-44a3-bb05-62672bf0c9c2-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:57 crc kubenswrapper[5112]: I1208 17:53:57.295471 5112 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c12d6b8-4ec0-4e64-91eb-be8ded1445bb-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:57 crc kubenswrapper[5112]: I1208 17:53:57.295479 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q5c5x\" (UniqueName: \"kubernetes.io/projected/ed4c161b-4118-44a3-bb05-62672bf0c9c2-kube-api-access-q5c5x\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:57 crc kubenswrapper[5112]: I1208 17:53:57.295487 5112 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c12d6b8-4ec0-4e64-91eb-be8ded1445bb-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:57 crc kubenswrapper[5112]: I1208 17:53:57.302860 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed4c161b-4118-44a3-bb05-62672bf0c9c2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ed4c161b-4118-44a3-bb05-62672bf0c9c2" (UID: "ed4c161b-4118-44a3-bb05-62672bf0c9c2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:53:57 crc kubenswrapper[5112]: I1208 17:53:57.397128 5112 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed4c161b-4118-44a3-bb05-62672bf0c9c2-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:57 crc kubenswrapper[5112]: I1208 17:53:57.695812 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4gccg" event={"ID":"7c12d6b8-4ec0-4e64-91eb-be8ded1445bb","Type":"ContainerDied","Data":"7331bdd979f29f3133da166832feb907b055d7b4e2db83d275e0ad38764d6409"} Dec 08 17:53:57 crc kubenswrapper[5112]: I1208 17:53:57.695881 5112 scope.go:117] "RemoveContainer" containerID="5c21db7abf4eb2e022cfbbd105d48d823a2f1cfdba198705b7f7335cdbbe6f1f" Dec 08 17:53:57 crc kubenswrapper[5112]: I1208 17:53:57.696030 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4gccg" Dec 08 17:53:57 crc kubenswrapper[5112]: I1208 17:53:57.701548 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p2nq5" event={"ID":"ed4c161b-4118-44a3-bb05-62672bf0c9c2","Type":"ContainerDied","Data":"8df2616a97072831a6f496cfa917100bfea207d616f03b69ae9092ef04d3269e"} Dec 08 17:53:57 crc kubenswrapper[5112]: I1208 17:53:57.701638 5112 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p2nq5" Dec 08 17:53:57 crc kubenswrapper[5112]: I1208 17:53:57.722138 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4gccg"] Dec 08 17:53:57 crc kubenswrapper[5112]: I1208 17:53:57.729302 5112 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4gccg"] Dec 08 17:53:57 crc kubenswrapper[5112]: I1208 17:53:57.731111 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p2nq5"] Dec 08 17:53:57 crc kubenswrapper[5112]: I1208 17:53:57.737795 5112 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-p2nq5"] Dec 08 17:53:59 crc kubenswrapper[5112]: I1208 17:53:59.325065 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c12d6b8-4ec0-4e64-91eb-be8ded1445bb" path="/var/lib/kubelet/pods/7c12d6b8-4ec0-4e64-91eb-be8ded1445bb/volumes" Dec 08 17:53:59 crc kubenswrapper[5112]: I1208 17:53:59.325807 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed4c161b-4118-44a3-bb05-62672bf0c9c2" path="/var/lib/kubelet/pods/ed4c161b-4118-44a3-bb05-62672bf0c9c2/volumes" Dec 08 17:54:03 crc kubenswrapper[5112]: I1208 17:54:03.957944 5112 scope.go:117] "RemoveContainer" containerID="24a58afe8046d8946d1d426214ef6c86ae9e2cba5ca23f6f1a3f13dc7cca77bb" Dec 08 17:54:03 crc kubenswrapper[5112]: I1208 17:54:03.987288 5112 scope.go:117] "RemoveContainer" containerID="57359817e8324595152c21a8c6d8ffe3118a2624c81a3b425c9af24c9d9c2bc9" Dec 08 17:54:04 crc kubenswrapper[5112]: I1208 17:54:04.024984 5112 scope.go:117] "RemoveContainer" containerID="ae1ba453c90b2830d544d2b22620f9eebafe57cc537ee700882e7d642e08a754" Dec 08 17:54:04 crc kubenswrapper[5112]: I1208 17:54:04.088328 5112 scope.go:117] "RemoveContainer" containerID="fff5c12b4fc65ed125a9806378301da22bbb72d500ff6787c425fd67fec62c48" Dec 08 17:54:04 crc 
kubenswrapper[5112]: I1208 17:54:04.202436 5112 scope.go:117] "RemoveContainer" containerID="dd8cc635878cddab5e8e609b7e9d4e997655a5bd809b434236e9230b4d00f67e" Dec 08 17:54:04 crc kubenswrapper[5112]: I1208 17:54:04.271909 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dzvnm"] Dec 08 17:54:04 crc kubenswrapper[5112]: I1208 17:54:04.742463 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-68bdb49cbf-tj4k9" event={"ID":"6787c278-01cc-4348-85d1-5619e1df4e1e","Type":"ContainerStarted","Data":"a306b1a4868b1bfc1c487e115eea71fb3d8004519664b4910dca7fd26c9ef12c"} Dec 08 17:54:04 crc kubenswrapper[5112]: I1208 17:54:04.743769 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-68bdb49cbf-tj4k9" Dec 08 17:54:04 crc kubenswrapper[5112]: I1208 17:54:04.744799 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-86648f486b-m4gvm" event={"ID":"8a394832-d3c8-43cf-8f57-c4615f527daf","Type":"ContainerStarted","Data":"40dd7431abd5510730fcc191d2bc8a4dbfb0e68208f22d6ee42d459553c31ae8"} Dec 08 17:54:04 crc kubenswrapper[5112]: I1208 17:54:04.747059 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-78c97476f4-t6qd6" event={"ID":"0849d3b3-1a57-4c5f-88b4-7b2b9278c70f","Type":"ContainerStarted","Data":"e63b249404fa75f4f565ee3be12edd5fddb345d4197a4158d1886730f71f6905"} Dec 08 17:54:04 crc kubenswrapper[5112]: I1208 17:54:04.747568 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-78c97476f4-t6qd6" Dec 08 17:54:04 crc kubenswrapper[5112]: I1208 17:54:04.749115 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59fcf9fbff-m79m9" 
event={"ID":"e45aa7ca-ac03-483b-9817-6f4597deb2f5","Type":"ContainerStarted","Data":"da06ce544f20b81f2694afa47641f3ef3d588e6f843d0c70a42a14aab9d3b4f5"} Dec 08 17:54:04 crc kubenswrapper[5112]: I1208 17:54:04.751108 5112 generic.go:358] "Generic (PLEG): container finished" podID="c402130e-f913-4588-9b7c-862415a55ca3" containerID="89097155fe8751777110f21cd3ff71ae592022e68864f7bff65971c6d3a21081" exitCode=0 Dec 08 17:54:04 crc kubenswrapper[5112]: I1208 17:54:04.751173 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dzvnm" event={"ID":"c402130e-f913-4588-9b7c-862415a55ca3","Type":"ContainerDied","Data":"89097155fe8751777110f21cd3ff71ae592022e68864f7bff65971c6d3a21081"} Dec 08 17:54:04 crc kubenswrapper[5112]: I1208 17:54:04.751189 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dzvnm" event={"ID":"c402130e-f913-4588-9b7c-862415a55ca3","Type":"ContainerStarted","Data":"9ef5fd80a1768f80225e4401d422b8c19db6a0eeabc77a0ff8b0e37ac67614e8"} Dec 08 17:54:04 crc kubenswrapper[5112]: I1208 17:54:04.755657 5112 generic.go:358] "Generic (PLEG): container finished" podID="852296c7-946f-4494-8e75-e5245a85c97f" containerID="7e299649569b1ef4ba0fa5c69a04f6cb8980aba3ec5f819301e33c9d6fe2ddab" exitCode=0 Dec 08 17:54:04 crc kubenswrapper[5112]: I1208 17:54:04.755709 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edjlwg" event={"ID":"852296c7-946f-4494-8e75-e5245a85c97f","Type":"ContainerDied","Data":"7e299649569b1ef4ba0fa5c69a04f6cb8980aba3ec5f819301e33c9d6fe2ddab"} Dec 08 17:54:04 crc kubenswrapper[5112]: I1208 17:54:04.757203 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59fcf9fbff-2mgl9" 
event={"ID":"4cf76964-3724-4ad9-b132-b8d54bef7ab6","Type":"ContainerStarted","Data":"cb8c8af9bd6def315a4b70d2248664f8d7aec07b894edb97b13c80362fa86257"} Dec 08 17:54:04 crc kubenswrapper[5112]: I1208 17:54:04.760875 5112 generic.go:358] "Generic (PLEG): container finished" podID="568f2ee0-3266-4392-b432-9c7deb6b0422" containerID="0e45e88141903ce1595d50e0c90db35dda6f3af1e098895c8104407d38b13668" exitCode=0 Dec 08 17:54:04 crc kubenswrapper[5112]: I1208 17:54:04.760993 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adgddd" event={"ID":"568f2ee0-3266-4392-b432-9c7deb6b0422","Type":"ContainerDied","Data":"0e45e88141903ce1595d50e0c90db35dda6f3af1e098895c8104407d38b13668"} Dec 08 17:54:04 crc kubenswrapper[5112]: I1208 17:54:04.768254 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-68bdb49cbf-tj4k9" podStartSLOduration=1.7912638539999999 podStartE2EDuration="21.768234002s" podCreationTimestamp="2025-12-08 17:53:43 +0000 UTC" firstStartedPulling="2025-12-08 17:53:44.009009987 +0000 UTC m=+801.018558688" lastFinishedPulling="2025-12-08 17:54:03.985980135 +0000 UTC m=+820.995528836" observedRunningTime="2025-12-08 17:54:04.766452493 +0000 UTC m=+821.776001184" watchObservedRunningTime="2025-12-08 17:54:04.768234002 +0000 UTC m=+821.777782703" Dec 08 17:54:04 crc kubenswrapper[5112]: I1208 17:54:04.781299 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-78c97476f4-t6qd6" Dec 08 17:54:04 crc kubenswrapper[5112]: I1208 17:54:04.791845 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-78c97476f4-t6qd6" podStartSLOduration=2.685688064 podStartE2EDuration="22.791827625s" podCreationTimestamp="2025-12-08 17:53:42 +0000 UTC" firstStartedPulling="2025-12-08 17:53:43.882104575 +0000 UTC 
m=+800.891653276" lastFinishedPulling="2025-12-08 17:54:03.988244136 +0000 UTC m=+820.997792837" observedRunningTime="2025-12-08 17:54:04.79053415 +0000 UTC m=+821.800082851" watchObservedRunningTime="2025-12-08 17:54:04.791827625 +0000 UTC m=+821.801376326" Dec 08 17:54:04 crc kubenswrapper[5112]: I1208 17:54:04.831141 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59fcf9fbff-2mgl9" podStartSLOduration=2.604981963 podStartE2EDuration="22.831127227s" podCreationTimestamp="2025-12-08 17:53:42 +0000 UTC" firstStartedPulling="2025-12-08 17:53:43.773459932 +0000 UTC m=+800.783008633" lastFinishedPulling="2025-12-08 17:54:03.999605196 +0000 UTC m=+821.009153897" observedRunningTime="2025-12-08 17:54:04.829245776 +0000 UTC m=+821.838794477" watchObservedRunningTime="2025-12-08 17:54:04.831127227 +0000 UTC m=+821.840675918" Dec 08 17:54:04 crc kubenswrapper[5112]: I1208 17:54:04.863438 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-59fcf9fbff-m79m9" podStartSLOduration=2.411853465 podStartE2EDuration="22.863419158s" podCreationTimestamp="2025-12-08 17:53:42 +0000 UTC" firstStartedPulling="2025-12-08 17:53:43.53546147 +0000 UTC m=+800.545010181" lastFinishedPulling="2025-12-08 17:54:03.987027173 +0000 UTC m=+820.996575874" observedRunningTime="2025-12-08 17:54:04.860873199 +0000 UTC m=+821.870421920" watchObservedRunningTime="2025-12-08 17:54:04.863419158 +0000 UTC m=+821.872967869" Dec 08 17:54:04 crc kubenswrapper[5112]: I1208 17:54:04.892603 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-86648f486b-m4gvm" podStartSLOduration=2.311623541 podStartE2EDuration="22.892578864s" podCreationTimestamp="2025-12-08 17:53:42 +0000 UTC" firstStartedPulling="2025-12-08 17:53:43.406172923 +0000 UTC m=+800.415721624" 
lastFinishedPulling="2025-12-08 17:54:03.987128246 +0000 UTC m=+820.996676947" observedRunningTime="2025-12-08 17:54:04.883235869 +0000 UTC m=+821.892784580" watchObservedRunningTime="2025-12-08 17:54:04.892578864 +0000 UTC m=+821.902127565" Dec 08 17:54:05 crc kubenswrapper[5112]: I1208 17:54:05.767877 5112 generic.go:358] "Generic (PLEG): container finished" podID="c402130e-f913-4588-9b7c-862415a55ca3" containerID="18b90c9970a955ba23cf581f487af50b10ef503e55817760c7f867179b4d3af7" exitCode=0 Dec 08 17:54:05 crc kubenswrapper[5112]: I1208 17:54:05.768071 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dzvnm" event={"ID":"c402130e-f913-4588-9b7c-862415a55ca3","Type":"ContainerDied","Data":"18b90c9970a955ba23cf581f487af50b10ef503e55817760c7f867179b4d3af7"} Dec 08 17:54:05 crc kubenswrapper[5112]: I1208 17:54:05.772776 5112 generic.go:358] "Generic (PLEG): container finished" podID="852296c7-946f-4494-8e75-e5245a85c97f" containerID="70f9f7f7ddc961a6d05167c0808fdb8b965e019db64f554ec3b439b88b416767" exitCode=0 Dec 08 17:54:05 crc kubenswrapper[5112]: I1208 17:54:05.772865 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edjlwg" event={"ID":"852296c7-946f-4494-8e75-e5245a85c97f","Type":"ContainerDied","Data":"70f9f7f7ddc961a6d05167c0808fdb8b965e019db64f554ec3b439b88b416767"} Dec 08 17:54:05 crc kubenswrapper[5112]: I1208 17:54:05.788682 5112 generic.go:358] "Generic (PLEG): container finished" podID="568f2ee0-3266-4392-b432-9c7deb6b0422" containerID="620882853b7753626f02e710e4d8e94fadc9abfdc19e1fc90c743e586e464c6a" exitCode=0 Dec 08 17:54:05 crc kubenswrapper[5112]: I1208 17:54:05.789902 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adgddd" 
event={"ID":"568f2ee0-3266-4392-b432-9c7deb6b0422","Type":"ContainerDied","Data":"620882853b7753626f02e710e4d8e94fadc9abfdc19e1fc90c743e586e464c6a"} Dec 08 17:54:06 crc kubenswrapper[5112]: I1208 17:54:06.798607 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dzvnm" event={"ID":"c402130e-f913-4588-9b7c-862415a55ca3","Type":"ContainerStarted","Data":"d842472fbe86dc5359c3ed4cf09f1b8c2c37ef245bc5bb3301d8c56f143f9577"} Dec 08 17:54:07 crc kubenswrapper[5112]: I1208 17:54:07.053581 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adgddd" Dec 08 17:54:07 crc kubenswrapper[5112]: I1208 17:54:07.079790 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dzvnm" podStartSLOduration=20.479725318 podStartE2EDuration="21.079769225s" podCreationTimestamp="2025-12-08 17:53:46 +0000 UTC" firstStartedPulling="2025-12-08 17:54:04.751729442 +0000 UTC m=+821.761278143" lastFinishedPulling="2025-12-08 17:54:05.351773339 +0000 UTC m=+822.361322050" observedRunningTime="2025-12-08 17:54:06.828026098 +0000 UTC m=+823.837574819" watchObservedRunningTime="2025-12-08 17:54:07.079769225 +0000 UTC m=+824.089317946" Dec 08 17:54:07 crc kubenswrapper[5112]: I1208 17:54:07.157213 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/568f2ee0-3266-4392-b432-9c7deb6b0422-util\") pod \"568f2ee0-3266-4392-b432-9c7deb6b0422\" (UID: \"568f2ee0-3266-4392-b432-9c7deb6b0422\") " Dec 08 17:54:07 crc kubenswrapper[5112]: I1208 17:54:07.157305 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/568f2ee0-3266-4392-b432-9c7deb6b0422-bundle\") pod \"568f2ee0-3266-4392-b432-9c7deb6b0422\" (UID: 
\"568f2ee0-3266-4392-b432-9c7deb6b0422\") " Dec 08 17:54:07 crc kubenswrapper[5112]: I1208 17:54:07.157428 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntp7q\" (UniqueName: \"kubernetes.io/projected/568f2ee0-3266-4392-b432-9c7deb6b0422-kube-api-access-ntp7q\") pod \"568f2ee0-3266-4392-b432-9c7deb6b0422\" (UID: \"568f2ee0-3266-4392-b432-9c7deb6b0422\") " Dec 08 17:54:07 crc kubenswrapper[5112]: I1208 17:54:07.159225 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/568f2ee0-3266-4392-b432-9c7deb6b0422-bundle" (OuterVolumeSpecName: "bundle") pod "568f2ee0-3266-4392-b432-9c7deb6b0422" (UID: "568f2ee0-3266-4392-b432-9c7deb6b0422"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:54:07 crc kubenswrapper[5112]: I1208 17:54:07.164828 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/568f2ee0-3266-4392-b432-9c7deb6b0422-util" (OuterVolumeSpecName: "util") pod "568f2ee0-3266-4392-b432-9c7deb6b0422" (UID: "568f2ee0-3266-4392-b432-9c7deb6b0422"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:54:07 crc kubenswrapper[5112]: I1208 17:54:07.165538 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/568f2ee0-3266-4392-b432-9c7deb6b0422-kube-api-access-ntp7q" (OuterVolumeSpecName: "kube-api-access-ntp7q") pod "568f2ee0-3266-4392-b432-9c7deb6b0422" (UID: "568f2ee0-3266-4392-b432-9c7deb6b0422"). InnerVolumeSpecName "kube-api-access-ntp7q". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:54:07 crc kubenswrapper[5112]: I1208 17:54:07.169507 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-dzvnm" Dec 08 17:54:07 crc kubenswrapper[5112]: I1208 17:54:07.170135 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dzvnm" Dec 08 17:54:07 crc kubenswrapper[5112]: I1208 17:54:07.204154 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edjlwg" Dec 08 17:54:07 crc kubenswrapper[5112]: I1208 17:54:07.258397 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfmb5\" (UniqueName: \"kubernetes.io/projected/852296c7-946f-4494-8e75-e5245a85c97f-kube-api-access-rfmb5\") pod \"852296c7-946f-4494-8e75-e5245a85c97f\" (UID: \"852296c7-946f-4494-8e75-e5245a85c97f\") " Dec 08 17:54:07 crc kubenswrapper[5112]: I1208 17:54:07.258547 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/852296c7-946f-4494-8e75-e5245a85c97f-bundle\") pod \"852296c7-946f-4494-8e75-e5245a85c97f\" (UID: \"852296c7-946f-4494-8e75-e5245a85c97f\") " Dec 08 17:54:07 crc kubenswrapper[5112]: I1208 17:54:07.258608 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/852296c7-946f-4494-8e75-e5245a85c97f-util\") pod \"852296c7-946f-4494-8e75-e5245a85c97f\" (UID: \"852296c7-946f-4494-8e75-e5245a85c97f\") " Dec 08 17:54:07 crc kubenswrapper[5112]: I1208 17:54:07.258922 5112 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/568f2ee0-3266-4392-b432-9c7deb6b0422-util\") on node \"crc\" DevicePath \"\"" Dec 08 17:54:07 crc 
kubenswrapper[5112]: I1208 17:54:07.258943 5112 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/568f2ee0-3266-4392-b432-9c7deb6b0422-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:54:07 crc kubenswrapper[5112]: I1208 17:54:07.258956 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ntp7q\" (UniqueName: \"kubernetes.io/projected/568f2ee0-3266-4392-b432-9c7deb6b0422-kube-api-access-ntp7q\") on node \"crc\" DevicePath \"\"" Dec 08 17:54:07 crc kubenswrapper[5112]: I1208 17:54:07.259799 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/852296c7-946f-4494-8e75-e5245a85c97f-bundle" (OuterVolumeSpecName: "bundle") pod "852296c7-946f-4494-8e75-e5245a85c97f" (UID: "852296c7-946f-4494-8e75-e5245a85c97f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:54:07 crc kubenswrapper[5112]: I1208 17:54:07.264403 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/852296c7-946f-4494-8e75-e5245a85c97f-kube-api-access-rfmb5" (OuterVolumeSpecName: "kube-api-access-rfmb5") pod "852296c7-946f-4494-8e75-e5245a85c97f" (UID: "852296c7-946f-4494-8e75-e5245a85c97f"). InnerVolumeSpecName "kube-api-access-rfmb5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:54:07 crc kubenswrapper[5112]: I1208 17:54:07.270120 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/852296c7-946f-4494-8e75-e5245a85c97f-util" (OuterVolumeSpecName: "util") pod "852296c7-946f-4494-8e75-e5245a85c97f" (UID: "852296c7-946f-4494-8e75-e5245a85c97f"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:54:07 crc kubenswrapper[5112]: I1208 17:54:07.360274 5112 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/852296c7-946f-4494-8e75-e5245a85c97f-util\") on node \"crc\" DevicePath \"\"" Dec 08 17:54:07 crc kubenswrapper[5112]: I1208 17:54:07.360326 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rfmb5\" (UniqueName: \"kubernetes.io/projected/852296c7-946f-4494-8e75-e5245a85c97f-kube-api-access-rfmb5\") on node \"crc\" DevicePath \"\"" Dec 08 17:54:07 crc kubenswrapper[5112]: I1208 17:54:07.360338 5112 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/852296c7-946f-4494-8e75-e5245a85c97f-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:54:07 crc kubenswrapper[5112]: I1208 17:54:07.807377 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edjlwg" event={"ID":"852296c7-946f-4494-8e75-e5245a85c97f","Type":"ContainerDied","Data":"afbb6052705c9391363414d3300286c15f638ba8dbf33a0239340a1caa867222"} Dec 08 17:54:07 crc kubenswrapper[5112]: I1208 17:54:07.807423 5112 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afbb6052705c9391363414d3300286c15f638ba8dbf33a0239340a1caa867222" Dec 08 17:54:07 crc kubenswrapper[5112]: I1208 17:54:07.807445 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5edjlwg" Dec 08 17:54:07 crc kubenswrapper[5112]: I1208 17:54:07.809255 5112 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adgddd" Dec 08 17:54:07 crc kubenswrapper[5112]: I1208 17:54:07.809256 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931adgddd" event={"ID":"568f2ee0-3266-4392-b432-9c7deb6b0422","Type":"ContainerDied","Data":"8943c644dfa5fe70f4683106b4b7f41ea7797638157f7becd682834383bf3b9b"} Dec 08 17:54:07 crc kubenswrapper[5112]: I1208 17:54:07.809294 5112 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8943c644dfa5fe70f4683106b4b7f41ea7797638157f7becd682834383bf3b9b" Dec 08 17:54:08 crc kubenswrapper[5112]: I1208 17:54:08.221611 5112 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-dzvnm" podUID="c402130e-f913-4588-9b7c-862415a55ca3" containerName="registry-server" probeResult="failure" output=< Dec 08 17:54:08 crc kubenswrapper[5112]: timeout: failed to connect service ":50051" within 1s Dec 08 17:54:08 crc kubenswrapper[5112]: > Dec 08 17:54:10 crc kubenswrapper[5112]: I1208 17:54:10.504559 5112 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" podUID="2ea5f194-6a0d-4339-9c15-bde6d3ca1540" containerName="registry" containerID="cri-o://3f5fd0a020aac31cd78f6c9bf9e4ad429957baa32a1a5a56cf140afb4e534d94" gracePeriod=30 Dec 08 17:54:10 crc kubenswrapper[5112]: I1208 17:54:10.841237 5112 generic.go:358] "Generic (PLEG): container finished" podID="2ea5f194-6a0d-4339-9c15-bde6d3ca1540" containerID="3f5fd0a020aac31cd78f6c9bf9e4ad429957baa32a1a5a56cf140afb4e534d94" exitCode=0 Dec 08 17:54:10 crc kubenswrapper[5112]: I1208 17:54:10.841759 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" 
event={"ID":"2ea5f194-6a0d-4339-9c15-bde6d3ca1540","Type":"ContainerDied","Data":"3f5fd0a020aac31cd78f6c9bf9e4ad429957baa32a1a5a56cf140afb4e534d94"} Dec 08 17:54:10 crc kubenswrapper[5112]: I1208 17:54:10.899620 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.008042 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clc4d\" (UniqueName: \"kubernetes.io/projected/2ea5f194-6a0d-4339-9c15-bde6d3ca1540-kube-api-access-clc4d\") pod \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.008203 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2ea5f194-6a0d-4339-9c15-bde6d3ca1540-installation-pull-secrets\") pod \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.008346 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.008378 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2ea5f194-6a0d-4339-9c15-bde6d3ca1540-registry-tls\") pod \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.008403 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/2ea5f194-6a0d-4339-9c15-bde6d3ca1540-bound-sa-token\") pod \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.008422 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2ea5f194-6a0d-4339-9c15-bde6d3ca1540-trusted-ca\") pod \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.008506 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2ea5f194-6a0d-4339-9c15-bde6d3ca1540-ca-trust-extracted\") pod \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.008637 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2ea5f194-6a0d-4339-9c15-bde6d3ca1540-registry-certificates\") pod \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\" (UID: \"2ea5f194-6a0d-4339-9c15-bde6d3ca1540\") " Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.009423 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ea5f194-6a0d-4339-9c15-bde6d3ca1540-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "2ea5f194-6a0d-4339-9c15-bde6d3ca1540" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.009796 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ea5f194-6a0d-4339-9c15-bde6d3ca1540-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2ea5f194-6a0d-4339-9c15-bde6d3ca1540" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.013660 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ea5f194-6a0d-4339-9c15-bde6d3ca1540-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "2ea5f194-6a0d-4339-9c15-bde6d3ca1540" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.013844 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ea5f194-6a0d-4339-9c15-bde6d3ca1540-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "2ea5f194-6a0d-4339-9c15-bde6d3ca1540" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.020055 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ea5f194-6a0d-4339-9c15-bde6d3ca1540-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "2ea5f194-6a0d-4339-9c15-bde6d3ca1540" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.021926 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "2ea5f194-6a0d-4339-9c15-bde6d3ca1540" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.035507 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ea5f194-6a0d-4339-9c15-bde6d3ca1540-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "2ea5f194-6a0d-4339-9c15-bde6d3ca1540" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.037481 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ea5f194-6a0d-4339-9c15-bde6d3ca1540-kube-api-access-clc4d" (OuterVolumeSpecName: "kube-api-access-clc4d") pod "2ea5f194-6a0d-4339-9c15-bde6d3ca1540" (UID: "2ea5f194-6a0d-4339-9c15-bde6d3ca1540"). InnerVolumeSpecName "kube-api-access-clc4d". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.109493 5112 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2ea5f194-6a0d-4339-9c15-bde6d3ca1540-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.109534 5112 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2ea5f194-6a0d-4339-9c15-bde6d3ca1540-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.109545 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-clc4d\" (UniqueName: \"kubernetes.io/projected/2ea5f194-6a0d-4339-9c15-bde6d3ca1540-kube-api-access-clc4d\") on node \"crc\" DevicePath \"\"" Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.109553 5112 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2ea5f194-6a0d-4339-9c15-bde6d3ca1540-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.109563 5112 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2ea5f194-6a0d-4339-9c15-bde6d3ca1540-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.109571 5112 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2ea5f194-6a0d-4339-9c15-bde6d3ca1540-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.109578 5112 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2ea5f194-6a0d-4339-9c15-bde6d3ca1540-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:54:11 crc 
kubenswrapper[5112]: I1208 17:54:11.581612 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-8hxcm"]
Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.582471 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ed4c161b-4118-44a3-bb05-62672bf0c9c2" containerName="extract-content"
Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.582490 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed4c161b-4118-44a3-bb05-62672bf0c9c2" containerName="extract-content"
Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.582505 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="568f2ee0-3266-4392-b432-9c7deb6b0422" containerName="extract"
Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.582512 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="568f2ee0-3266-4392-b432-9c7deb6b0422" containerName="extract"
Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.582522 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ed4c161b-4118-44a3-bb05-62672bf0c9c2" containerName="extract-utilities"
Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.582531 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed4c161b-4118-44a3-bb05-62672bf0c9c2" containerName="extract-utilities"
Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.582538 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7c12d6b8-4ec0-4e64-91eb-be8ded1445bb" containerName="extract-content"
Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.582544 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c12d6b8-4ec0-4e64-91eb-be8ded1445bb" containerName="extract-content"
Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.582556 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="568f2ee0-3266-4392-b432-9c7deb6b0422" containerName="pull"
Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.582561 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="568f2ee0-3266-4392-b432-9c7deb6b0422" containerName="pull"
Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.582568 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="852296c7-946f-4494-8e75-e5245a85c97f" containerName="extract"
Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.582576 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="852296c7-946f-4494-8e75-e5245a85c97f" containerName="extract"
Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.582586 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ed4c161b-4118-44a3-bb05-62672bf0c9c2" containerName="registry-server"
Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.582594 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed4c161b-4118-44a3-bb05-62672bf0c9c2" containerName="registry-server"
Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.582610 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="852296c7-946f-4494-8e75-e5245a85c97f" containerName="util"
Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.582617 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="852296c7-946f-4494-8e75-e5245a85c97f" containerName="util"
Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.582624 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7c12d6b8-4ec0-4e64-91eb-be8ded1445bb" containerName="extract-utilities"
Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.582629 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c12d6b8-4ec0-4e64-91eb-be8ded1445bb" containerName="extract-utilities"
Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.582636 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7c12d6b8-4ec0-4e64-91eb-be8ded1445bb" containerName="registry-server"
Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.582641 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c12d6b8-4ec0-4e64-91eb-be8ded1445bb" containerName="registry-server"
Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.582649 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="568f2ee0-3266-4392-b432-9c7deb6b0422" containerName="util"
Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.582653 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="568f2ee0-3266-4392-b432-9c7deb6b0422" containerName="util"
Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.582666 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="852296c7-946f-4494-8e75-e5245a85c97f" containerName="pull"
Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.582671 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="852296c7-946f-4494-8e75-e5245a85c97f" containerName="pull"
Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.582679 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2ea5f194-6a0d-4339-9c15-bde6d3ca1540" containerName="registry"
Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.582686 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ea5f194-6a0d-4339-9c15-bde6d3ca1540" containerName="registry"
Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.582791 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="ed4c161b-4118-44a3-bb05-62672bf0c9c2" containerName="registry-server"
Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.582800 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="7c12d6b8-4ec0-4e64-91eb-be8ded1445bb" containerName="registry-server"
Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.582811 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="852296c7-946f-4494-8e75-e5245a85c97f" containerName="extract"
Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.582825 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="568f2ee0-3266-4392-b432-9c7deb6b0422" containerName="extract"
Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.582833 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="2ea5f194-6a0d-4339-9c15-bde6d3ca1540" containerName="registry"
Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.706118 5112 patch_prober.go:28] interesting pod/machine-config-daemon-s6wzf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.706202 5112 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" podUID="95e46da0-94bb-4d22-804b-b3018984cdac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.998299 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-vpxb8" event={"ID":"2ea5f194-6a0d-4339-9c15-bde6d3ca1540","Type":"ContainerDied","Data":"40aceaf3237317514299eaa17381a2618c7ee9d07e97c546e7d8bc1f2f20907b"}
Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.998361 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-8hxcm"]
Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.998390 5112 scope.go:117] "RemoveContainer" containerID="3f5fd0a020aac31cd78f6c9bf9e4ad429957baa32a1a5a56cf140afb4e534d94"
Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.998418 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-vpxb8"
Dec 08 17:54:11 crc kubenswrapper[5112]: I1208 17:54:11.998720 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-8hxcm"
Dec 08 17:54:12 crc kubenswrapper[5112]: I1208 17:54:12.002469 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\""
Dec 08 17:54:12 crc kubenswrapper[5112]: I1208 17:54:12.002535 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\""
Dec 08 17:54:12 crc kubenswrapper[5112]: I1208 17:54:12.002540 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"interconnect-operator-dockercfg-p8bjl\""
Dec 08 17:54:12 crc kubenswrapper[5112]: I1208 17:54:12.034662 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-vpxb8"]
Dec 08 17:54:12 crc kubenswrapper[5112]: I1208 17:54:12.039797 5112 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-vpxb8"]
Dec 08 17:54:12 crc kubenswrapper[5112]: I1208 17:54:12.121994 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w4tz\" (UniqueName: \"kubernetes.io/projected/fe1f29a7-d254-48b4-b6ea-4076ded777ba-kube-api-access-7w4tz\") pod \"interconnect-operator-78b9bd8798-8hxcm\" (UID: \"fe1f29a7-d254-48b4-b6ea-4076ded777ba\") " pod="service-telemetry/interconnect-operator-78b9bd8798-8hxcm"
Dec 08 17:54:12 crc kubenswrapper[5112]: I1208 17:54:12.222663 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7w4tz\" (UniqueName: \"kubernetes.io/projected/fe1f29a7-d254-48b4-b6ea-4076ded777ba-kube-api-access-7w4tz\") pod \"interconnect-operator-78b9bd8798-8hxcm\" (UID: \"fe1f29a7-d254-48b4-b6ea-4076ded777ba\") " pod="service-telemetry/interconnect-operator-78b9bd8798-8hxcm"
Dec 08 17:54:12 crc kubenswrapper[5112]: I1208 17:54:12.240373 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w4tz\" (UniqueName: \"kubernetes.io/projected/fe1f29a7-d254-48b4-b6ea-4076ded777ba-kube-api-access-7w4tz\") pod \"interconnect-operator-78b9bd8798-8hxcm\" (UID: \"fe1f29a7-d254-48b4-b6ea-4076ded777ba\") " pod="service-telemetry/interconnect-operator-78b9bd8798-8hxcm"
Dec 08 17:54:12 crc kubenswrapper[5112]: I1208 17:54:12.313235 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-8hxcm"
Dec 08 17:54:12 crc kubenswrapper[5112]: I1208 17:54:12.901331 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-8hxcm"]
Dec 08 17:54:12 crc kubenswrapper[5112]: W1208 17:54:12.904883 5112 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe1f29a7_d254_48b4_b6ea_4076ded777ba.slice/crio-f8cdb1ca661d4a5bb7532ed06c5f6bfb1ffb6c231127ac2b7eb8b448420e7f82 WatchSource:0}: Error finding container f8cdb1ca661d4a5bb7532ed06c5f6bfb1ffb6c231127ac2b7eb8b448420e7f82: Status 404 returned error can't find the container with id f8cdb1ca661d4a5bb7532ed06c5f6bfb1ffb6c231127ac2b7eb8b448420e7f82
Dec 08 17:54:13 crc kubenswrapper[5112]: I1208 17:54:13.324145 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ea5f194-6a0d-4339-9c15-bde6d3ca1540" path="/var/lib/kubelet/pods/2ea5f194-6a0d-4339-9c15-bde6d3ca1540/volumes"
Dec 08 17:54:13 crc kubenswrapper[5112]: I1208 17:54:13.863902 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-8hxcm" event={"ID":"fe1f29a7-d254-48b4-b6ea-4076ded777ba","Type":"ContainerStarted","Data":"f8cdb1ca661d4a5bb7532ed06c5f6bfb1ffb6c231127ac2b7eb8b448420e7f82"}
Dec 08 17:54:14 crc kubenswrapper[5112]: I1208 17:54:14.076928 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-6d678f5bbf-5xfzk"]
Dec 08 17:54:14 crc kubenswrapper[5112]: I1208 17:54:14.592415 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-6d678f5bbf-5xfzk"]
Dec 08 17:54:14 crc kubenswrapper[5112]: I1208 17:54:14.592626 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-6d678f5bbf-5xfzk"
Dec 08 17:54:14 crc kubenswrapper[5112]: I1208 17:54:14.594604 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-6qqrb\""
Dec 08 17:54:14 crc kubenswrapper[5112]: I1208 17:54:14.594662 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\""
Dec 08 17:54:14 crc kubenswrapper[5112]: I1208 17:54:14.649266 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cjhb\" (UniqueName: \"kubernetes.io/projected/aaea405e-bb7c-46ea-89b6-e2eb1c9c5ee4-kube-api-access-8cjhb\") pod \"elastic-operator-6d678f5bbf-5xfzk\" (UID: \"aaea405e-bb7c-46ea-89b6-e2eb1c9c5ee4\") " pod="service-telemetry/elastic-operator-6d678f5bbf-5xfzk"
Dec 08 17:54:14 crc kubenswrapper[5112]: I1208 17:54:14.649359 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/aaea405e-bb7c-46ea-89b6-e2eb1c9c5ee4-apiservice-cert\") pod \"elastic-operator-6d678f5bbf-5xfzk\" (UID: \"aaea405e-bb7c-46ea-89b6-e2eb1c9c5ee4\") " pod="service-telemetry/elastic-operator-6d678f5bbf-5xfzk"
Dec 08 17:54:14 crc kubenswrapper[5112]: I1208 17:54:14.649551 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/aaea405e-bb7c-46ea-89b6-e2eb1c9c5ee4-webhook-cert\") pod \"elastic-operator-6d678f5bbf-5xfzk\" (UID: \"aaea405e-bb7c-46ea-89b6-e2eb1c9c5ee4\") " pod="service-telemetry/elastic-operator-6d678f5bbf-5xfzk"
Dec 08 17:54:14 crc kubenswrapper[5112]: I1208 17:54:14.750998 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/aaea405e-bb7c-46ea-89b6-e2eb1c9c5ee4-webhook-cert\") pod \"elastic-operator-6d678f5bbf-5xfzk\" (UID: \"aaea405e-bb7c-46ea-89b6-e2eb1c9c5ee4\") " pod="service-telemetry/elastic-operator-6d678f5bbf-5xfzk"
Dec 08 17:54:14 crc kubenswrapper[5112]: I1208 17:54:14.751118 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8cjhb\" (UniqueName: \"kubernetes.io/projected/aaea405e-bb7c-46ea-89b6-e2eb1c9c5ee4-kube-api-access-8cjhb\") pod \"elastic-operator-6d678f5bbf-5xfzk\" (UID: \"aaea405e-bb7c-46ea-89b6-e2eb1c9c5ee4\") " pod="service-telemetry/elastic-operator-6d678f5bbf-5xfzk"
Dec 08 17:54:14 crc kubenswrapper[5112]: I1208 17:54:14.751179 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/aaea405e-bb7c-46ea-89b6-e2eb1c9c5ee4-apiservice-cert\") pod \"elastic-operator-6d678f5bbf-5xfzk\" (UID: \"aaea405e-bb7c-46ea-89b6-e2eb1c9c5ee4\") " pod="service-telemetry/elastic-operator-6d678f5bbf-5xfzk"
Dec 08 17:54:14 crc kubenswrapper[5112]: I1208 17:54:14.758556 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/aaea405e-bb7c-46ea-89b6-e2eb1c9c5ee4-apiservice-cert\") pod \"elastic-operator-6d678f5bbf-5xfzk\" (UID: \"aaea405e-bb7c-46ea-89b6-e2eb1c9c5ee4\") " pod="service-telemetry/elastic-operator-6d678f5bbf-5xfzk"
Dec 08 17:54:14 crc kubenswrapper[5112]: I1208 17:54:14.758624 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/aaea405e-bb7c-46ea-89b6-e2eb1c9c5ee4-webhook-cert\") pod \"elastic-operator-6d678f5bbf-5xfzk\" (UID: \"aaea405e-bb7c-46ea-89b6-e2eb1c9c5ee4\") " pod="service-telemetry/elastic-operator-6d678f5bbf-5xfzk"
Dec 08 17:54:14 crc kubenswrapper[5112]: I1208 17:54:14.769113 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cjhb\" (UniqueName: \"kubernetes.io/projected/aaea405e-bb7c-46ea-89b6-e2eb1c9c5ee4-kube-api-access-8cjhb\") pod \"elastic-operator-6d678f5bbf-5xfzk\" (UID: \"aaea405e-bb7c-46ea-89b6-e2eb1c9c5ee4\") " pod="service-telemetry/elastic-operator-6d678f5bbf-5xfzk"
Dec 08 17:54:14 crc kubenswrapper[5112]: I1208 17:54:14.915603 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-6d678f5bbf-5xfzk"
Dec 08 17:54:15 crc kubenswrapper[5112]: I1208 17:54:15.152420 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-6d678f5bbf-5xfzk"]
Dec 08 17:54:15 crc kubenswrapper[5112]: I1208 17:54:15.881632 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-6d678f5bbf-5xfzk" event={"ID":"aaea405e-bb7c-46ea-89b6-e2eb1c9c5ee4","Type":"ContainerStarted","Data":"d2d360aef16da670a2b1e56688d17cb60a6fd43602c2cfbea20ec683e2ba159c"}
Dec 08 17:54:16 crc kubenswrapper[5112]: I1208 17:54:16.806156 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-68bdb49cbf-tj4k9"
Dec 08 17:54:17 crc kubenswrapper[5112]: I1208 17:54:17.240672 5112 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dzvnm"
Dec 08 17:54:17 crc kubenswrapper[5112]: I1208 17:54:17.301015 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dzvnm"
Dec 08 17:54:19 crc kubenswrapper[5112]: I1208 17:54:19.773723 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-vzxfh"]
Dec 08 17:54:20 crc kubenswrapper[5112]: I1208 17:54:20.579727 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-vzxfh"]
Dec 08 17:54:20 crc kubenswrapper[5112]: I1208 17:54:20.579777 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dzvnm"]
Dec 08 17:54:20 crc kubenswrapper[5112]: I1208 17:54:20.580006 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-vzxfh"
Dec 08 17:54:20 crc kubenswrapper[5112]: I1208 17:54:20.580175 5112 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dzvnm" podUID="c402130e-f913-4588-9b7c-862415a55ca3" containerName="registry-server" containerID="cri-o://d842472fbe86dc5359c3ed4cf09f1b8c2c37ef245bc5bb3301d8c56f143f9577" gracePeriod=2
Dec 08 17:54:20 crc kubenswrapper[5112]: I1208 17:54:20.584264 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\""
Dec 08 17:54:20 crc kubenswrapper[5112]: I1208 17:54:20.584641 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-mwkrj\""
Dec 08 17:54:20 crc kubenswrapper[5112]: I1208 17:54:20.584860 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\""
Dec 08 17:54:20 crc kubenswrapper[5112]: I1208 17:54:20.635869 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f011ab47-a4b5-4095-b09c-dcf91ae8ee28-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-vzxfh\" (UID: \"f011ab47-a4b5-4095-b09c-dcf91ae8ee28\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-vzxfh"
Dec 08 17:54:20 crc kubenswrapper[5112]: I1208 17:54:20.636011 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsgc9\" (UniqueName: \"kubernetes.io/projected/f011ab47-a4b5-4095-b09c-dcf91ae8ee28-kube-api-access-tsgc9\") pod \"cert-manager-operator-controller-manager-64c74584c4-vzxfh\" (UID: \"f011ab47-a4b5-4095-b09c-dcf91ae8ee28\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-vzxfh"
Dec 08 17:54:20 crc kubenswrapper[5112]: I1208 17:54:20.738227 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tsgc9\" (UniqueName: \"kubernetes.io/projected/f011ab47-a4b5-4095-b09c-dcf91ae8ee28-kube-api-access-tsgc9\") pod \"cert-manager-operator-controller-manager-64c74584c4-vzxfh\" (UID: \"f011ab47-a4b5-4095-b09c-dcf91ae8ee28\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-vzxfh"
Dec 08 17:54:20 crc kubenswrapper[5112]: I1208 17:54:20.738312 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f011ab47-a4b5-4095-b09c-dcf91ae8ee28-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-vzxfh\" (UID: \"f011ab47-a4b5-4095-b09c-dcf91ae8ee28\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-vzxfh"
Dec 08 17:54:20 crc kubenswrapper[5112]: I1208 17:54:20.738798 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f011ab47-a4b5-4095-b09c-dcf91ae8ee28-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-vzxfh\" (UID: \"f011ab47-a4b5-4095-b09c-dcf91ae8ee28\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-vzxfh"
Dec 08 17:54:20 crc kubenswrapper[5112]: I1208 17:54:20.759774 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsgc9\" (UniqueName: \"kubernetes.io/projected/f011ab47-a4b5-4095-b09c-dcf91ae8ee28-kube-api-access-tsgc9\") pod \"cert-manager-operator-controller-manager-64c74584c4-vzxfh\" (UID: \"f011ab47-a4b5-4095-b09c-dcf91ae8ee28\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-vzxfh"
Dec 08 17:54:20 crc kubenswrapper[5112]: I1208 17:54:20.905012 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-vzxfh"
Dec 08 17:54:20 crc kubenswrapper[5112]: I1208 17:54:20.916726 5112 generic.go:358] "Generic (PLEG): container finished" podID="c402130e-f913-4588-9b7c-862415a55ca3" containerID="d842472fbe86dc5359c3ed4cf09f1b8c2c37ef245bc5bb3301d8c56f143f9577" exitCode=0
Dec 08 17:54:20 crc kubenswrapper[5112]: I1208 17:54:20.916838 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dzvnm" event={"ID":"c402130e-f913-4588-9b7c-862415a55ca3","Type":"ContainerDied","Data":"d842472fbe86dc5359c3ed4cf09f1b8c2c37ef245bc5bb3301d8c56f143f9577"}
Dec 08 17:54:26 crc kubenswrapper[5112]: I1208 17:54:26.798741 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dzvnm"
Dec 08 17:54:26 crc kubenswrapper[5112]: I1208 17:54:26.847493 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c402130e-f913-4588-9b7c-862415a55ca3-catalog-content\") pod \"c402130e-f913-4588-9b7c-862415a55ca3\" (UID: \"c402130e-f913-4588-9b7c-862415a55ca3\") "
Dec 08 17:54:26 crc kubenswrapper[5112]: I1208 17:54:26.847893 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58dqb\" (UniqueName: \"kubernetes.io/projected/c402130e-f913-4588-9b7c-862415a55ca3-kube-api-access-58dqb\") pod \"c402130e-f913-4588-9b7c-862415a55ca3\" (UID: \"c402130e-f913-4588-9b7c-862415a55ca3\") "
Dec 08 17:54:26 crc kubenswrapper[5112]: I1208 17:54:26.848008 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c402130e-f913-4588-9b7c-862415a55ca3-utilities\") pod \"c402130e-f913-4588-9b7c-862415a55ca3\" (UID: \"c402130e-f913-4588-9b7c-862415a55ca3\") "
Dec 08 17:54:26 crc kubenswrapper[5112]: I1208 17:54:26.849264 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c402130e-f913-4588-9b7c-862415a55ca3-utilities" (OuterVolumeSpecName: "utilities") pod "c402130e-f913-4588-9b7c-862415a55ca3" (UID: "c402130e-f913-4588-9b7c-862415a55ca3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:54:26 crc kubenswrapper[5112]: I1208 17:54:26.906714 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c402130e-f913-4588-9b7c-862415a55ca3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c402130e-f913-4588-9b7c-862415a55ca3" (UID: "c402130e-f913-4588-9b7c-862415a55ca3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:54:26 crc kubenswrapper[5112]: I1208 17:54:26.949618 5112 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c402130e-f913-4588-9b7c-862415a55ca3-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 08 17:54:26 crc kubenswrapper[5112]: I1208 17:54:26.949663 5112 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c402130e-f913-4588-9b7c-862415a55ca3-utilities\") on node \"crc\" DevicePath \"\""
Dec 08 17:54:26 crc kubenswrapper[5112]: I1208 17:54:26.966256 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dzvnm"
Dec 08 17:54:26 crc kubenswrapper[5112]: I1208 17:54:26.966244 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dzvnm" event={"ID":"c402130e-f913-4588-9b7c-862415a55ca3","Type":"ContainerDied","Data":"9ef5fd80a1768f80225e4401d422b8c19db6a0eeabc77a0ff8b0e37ac67614e8"}
Dec 08 17:54:26 crc kubenswrapper[5112]: I1208 17:54:26.966436 5112 scope.go:117] "RemoveContainer" containerID="d842472fbe86dc5359c3ed4cf09f1b8c2c37ef245bc5bb3301d8c56f143f9577"
Dec 08 17:54:26 crc kubenswrapper[5112]: I1208 17:54:26.973059 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c402130e-f913-4588-9b7c-862415a55ca3-kube-api-access-58dqb" (OuterVolumeSpecName: "kube-api-access-58dqb") pod "c402130e-f913-4588-9b7c-862415a55ca3" (UID: "c402130e-f913-4588-9b7c-862415a55ca3"). InnerVolumeSpecName "kube-api-access-58dqb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:54:27 crc kubenswrapper[5112]: I1208 17:54:27.027274 5112 scope.go:117] "RemoveContainer" containerID="18b90c9970a955ba23cf581f487af50b10ef503e55817760c7f867179b4d3af7"
Dec 08 17:54:27 crc kubenswrapper[5112]: I1208 17:54:27.077875 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-58dqb\" (UniqueName: \"kubernetes.io/projected/c402130e-f913-4588-9b7c-862415a55ca3-kube-api-access-58dqb\") on node \"crc\" DevicePath \"\""
Dec 08 17:54:27 crc kubenswrapper[5112]: I1208 17:54:27.143033 5112 scope.go:117] "RemoveContainer" containerID="89097155fe8751777110f21cd3ff71ae592022e68864f7bff65971c6d3a21081"
Dec 08 17:54:27 crc kubenswrapper[5112]: I1208 17:54:27.315933 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dzvnm"]
Dec 08 17:54:27 crc kubenswrapper[5112]: I1208 17:54:27.325977 5112 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dzvnm"]
Dec 08 17:54:27 crc kubenswrapper[5112]: I1208 17:54:27.344283 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-vzxfh"]
Dec 08 17:54:27 crc kubenswrapper[5112]: W1208 17:54:27.350235 5112 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf011ab47_a4b5_4095_b09c_dcf91ae8ee28.slice/crio-1de9f2c1261b2095e4fd0d454d92eef16104824148fdaea20aab59f9beaae6c0 WatchSource:0}: Error finding container 1de9f2c1261b2095e4fd0d454d92eef16104824148fdaea20aab59f9beaae6c0: Status 404 returned error can't find the container with id 1de9f2c1261b2095e4fd0d454d92eef16104824148fdaea20aab59f9beaae6c0
Dec 08 17:54:27 crc kubenswrapper[5112]: I1208 17:54:27.973289 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-vzxfh" event={"ID":"f011ab47-a4b5-4095-b09c-dcf91ae8ee28","Type":"ContainerStarted","Data":"1de9f2c1261b2095e4fd0d454d92eef16104824148fdaea20aab59f9beaae6c0"}
Dec 08 17:54:27 crc kubenswrapper[5112]: I1208 17:54:27.975530 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-6d678f5bbf-5xfzk" event={"ID":"aaea405e-bb7c-46ea-89b6-e2eb1c9c5ee4","Type":"ContainerStarted","Data":"05a5d245107db645bd08f3698efd23039b5ab9c7083bb8b4400aca22a763c70d"}
Dec 08 17:54:27 crc kubenswrapper[5112]: I1208 17:54:27.977164 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-8hxcm" event={"ID":"fe1f29a7-d254-48b4-b6ea-4076ded777ba","Type":"ContainerStarted","Data":"06dec83705949eac0d0b6bc399522f790aed5fb51600e2e6caad2f5564a2ec25"}
Dec 08 17:54:28 crc kubenswrapper[5112]: I1208 17:54:28.008637 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-6d678f5bbf-5xfzk" podStartSLOduration=2.421992575 podStartE2EDuration="14.008617307s" podCreationTimestamp="2025-12-08 17:54:14 +0000 UTC" firstStartedPulling="2025-12-08 17:54:15.164422542 +0000 UTC m=+832.173971243" lastFinishedPulling="2025-12-08 17:54:26.751047274 +0000 UTC m=+843.760595975" observedRunningTime="2025-12-08 17:54:28.005900453 +0000 UTC m=+845.015449164" watchObservedRunningTime="2025-12-08 17:54:28.008617307 +0000 UTC m=+845.018166008"
Dec 08 17:54:28 crc kubenswrapper[5112]: I1208 17:54:28.040209 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-78b9bd8798-8hxcm" podStartSLOduration=3.183250429 podStartE2EDuration="17.040184088s" podCreationTimestamp="2025-12-08 17:54:11 +0000 UTC" firstStartedPulling="2025-12-08 17:54:12.906343238 +0000 UTC m=+829.915891939" lastFinishedPulling="2025-12-08 17:54:26.763276897 +0000 UTC m=+843.772825598" observedRunningTime="2025-12-08 17:54:28.038078041 +0000 UTC m=+845.047626772" watchObservedRunningTime="2025-12-08 17:54:28.040184088 +0000 UTC m=+845.049732809"
Dec 08 17:54:29 crc kubenswrapper[5112]: I1208 17:54:29.323455 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c402130e-f913-4588-9b7c-862415a55ca3" path="/var/lib/kubelet/pods/c402130e-f913-4588-9b7c-862415a55ca3/volumes"
Dec 08 17:54:30 crc kubenswrapper[5112]: I1208 17:54:30.996569 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-vzxfh" event={"ID":"f011ab47-a4b5-4095-b09c-dcf91ae8ee28","Type":"ContainerStarted","Data":"fbc80463cdb03bd6e618fcd1a7973fec544960dec15e80eff6fda2184ca29915"}
Dec 08 17:54:31 crc kubenswrapper[5112]: I1208 17:54:31.021382 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-vzxfh" podStartSLOduration=8.659360221 podStartE2EDuration="12.021359017s" podCreationTimestamp="2025-12-08 17:54:19 +0000 UTC" firstStartedPulling="2025-12-08 17:54:27.352818869 +0000 UTC m=+844.362367570" lastFinishedPulling="2025-12-08 17:54:30.714817665 +0000 UTC m=+847.724366366" observedRunningTime="2025-12-08 17:54:31.014312564 +0000 UTC m=+848.023861255" watchObservedRunningTime="2025-12-08 17:54:31.021359017 +0000 UTC m=+848.030907718"
Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.697672 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.698551 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c402130e-f913-4588-9b7c-862415a55ca3" containerName="extract-content"
Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.698568 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="c402130e-f913-4588-9b7c-862415a55ca3" containerName="extract-content"
Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.698593 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c402130e-f913-4588-9b7c-862415a55ca3" containerName="registry-server"
Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.698599 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="c402130e-f913-4588-9b7c-862415a55ca3" containerName="registry-server"
Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.698609 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c402130e-f913-4588-9b7c-862415a55ca3" containerName="extract-utilities"
Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.698619 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="c402130e-f913-4588-9b7c-862415a55ca3" containerName="extract-utilities"
Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.698710 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="c402130e-f913-4588-9b7c-862415a55ca3" containerName="registry-server"
Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.707429 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.709358 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\""
Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.709502 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\""
Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.709555 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\""
Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.710405 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\""
Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.710653 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\""
Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.710772 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-tk8st\""
Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.712535 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\""
Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.712640 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\""
Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.712832 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\""
Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.718045 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api"
pods=["service-telemetry/elasticsearch-es-default-0"] Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.795036 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-6t57d"] Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.799239 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-6t57d" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.800779 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-dqpbd\"" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.800893 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\"" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.801582 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\"" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.806441 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-6t57d"] Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.837830 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.837899 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " 
pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.837930 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.838038 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.838099 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.838167 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.838250 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" 
(UniqueName: \"kubernetes.io/configmap/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.838286 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.838316 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.838341 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.838426 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: 
\"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.838490 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.838510 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.838528 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.838625 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.939811 5112 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.939867 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/495b8a71-df81-49dc-8823-bbee4350dfc0-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-6t57d\" (UID: \"495b8a71-df81-49dc-8823-bbee4350dfc0\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-6t57d" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.939898 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.939937 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.939989 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " 
pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.940025 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.940054 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.940125 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.940342 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.940426 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: 
\"kubernetes.io/downward-api/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.940472 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.940507 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.940629 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.940671 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.940732 
5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vj2t\" (UniqueName: \"kubernetes.io/projected/495b8a71-df81-49dc-8823-bbee4350dfc0-kube-api-access-5vj2t\") pod \"cert-manager-webhook-7894b5b9b4-6t57d\" (UID: \"495b8a71-df81-49dc-8823-bbee4350dfc0\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-6t57d" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.940768 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.940823 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.940867 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.940941 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " 
pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.940998 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.941489 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.941952 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.942051 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.942071 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-elastic-internal-unicast-hosts\") 
pod \"elasticsearch-es-default-0\" (UID: \"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.942614 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.947091 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.947638 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.957716 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.958229 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.958524 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.959733 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:34 crc kubenswrapper[5112]: I1208 17:54:34.959937 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:35 crc kubenswrapper[5112]: I1208 17:54:35.037189 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:35 crc kubenswrapper[5112]: I1208 17:54:35.041774 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5vj2t\" (UniqueName: \"kubernetes.io/projected/495b8a71-df81-49dc-8823-bbee4350dfc0-kube-api-access-5vj2t\") pod \"cert-manager-webhook-7894b5b9b4-6t57d\" (UID: \"495b8a71-df81-49dc-8823-bbee4350dfc0\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-6t57d" Dec 08 17:54:35 crc kubenswrapper[5112]: I1208 17:54:35.041837 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/495b8a71-df81-49dc-8823-bbee4350dfc0-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-6t57d\" (UID: \"495b8a71-df81-49dc-8823-bbee4350dfc0\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-6t57d" Dec 08 17:54:35 crc kubenswrapper[5112]: I1208 17:54:35.067033 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vj2t\" (UniqueName: \"kubernetes.io/projected/495b8a71-df81-49dc-8823-bbee4350dfc0-kube-api-access-5vj2t\") pod \"cert-manager-webhook-7894b5b9b4-6t57d\" (UID: \"495b8a71-df81-49dc-8823-bbee4350dfc0\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-6t57d" Dec 08 17:54:35 crc kubenswrapper[5112]: I1208 17:54:35.070845 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/495b8a71-df81-49dc-8823-bbee4350dfc0-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-6t57d\" (UID: \"495b8a71-df81-49dc-8823-bbee4350dfc0\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-6t57d" Dec 08 17:54:35 crc kubenswrapper[5112]: I1208 17:54:35.111985 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-6t57d" Dec 08 17:54:35 crc kubenswrapper[5112]: I1208 17:54:35.374449 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 08 17:54:35 crc kubenswrapper[5112]: I1208 17:54:35.452747 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-6t57d"] Dec 08 17:54:35 crc kubenswrapper[5112]: W1208 17:54:35.455152 5112 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod495b8a71_df81_49dc_8823_bbee4350dfc0.slice/crio-e19cedc9d451fd7422ebe25de921aca98064299be70c8191413ace3a91738439 WatchSource:0}: Error finding container e19cedc9d451fd7422ebe25de921aca98064299be70c8191413ace3a91738439: Status 404 returned error can't find the container with id e19cedc9d451fd7422ebe25de921aca98064299be70c8191413ace3a91738439 Dec 08 17:54:35 crc kubenswrapper[5112]: I1208 17:54:35.943716 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-x2rmw"] Dec 08 17:54:35 crc kubenswrapper[5112]: I1208 17:54:35.950132 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-x2rmw" Dec 08 17:54:35 crc kubenswrapper[5112]: I1208 17:54:35.954625 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-pdrpv\"" Dec 08 17:54:35 crc kubenswrapper[5112]: I1208 17:54:35.954726 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wvbz\" (UniqueName: \"kubernetes.io/projected/a1ec93f6-80fd-49ea-ad57-6dbc77c4a5f9-kube-api-access-8wvbz\") pod \"cert-manager-cainjector-7dbf76d5c8-x2rmw\" (UID: \"a1ec93f6-80fd-49ea-ad57-6dbc77c4a5f9\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-x2rmw" Dec 08 17:54:35 crc kubenswrapper[5112]: I1208 17:54:35.954768 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a1ec93f6-80fd-49ea-ad57-6dbc77c4a5f9-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-x2rmw\" (UID: \"a1ec93f6-80fd-49ea-ad57-6dbc77c4a5f9\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-x2rmw" Dec 08 17:54:35 crc kubenswrapper[5112]: I1208 17:54:35.959407 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-x2rmw"] Dec 08 17:54:36 crc kubenswrapper[5112]: I1208 17:54:36.026914 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21","Type":"ContainerStarted","Data":"dc6896be2ddca34d9fbe10cf13d0cc04187b1ea6b59d707c147b574fe7b33c8f"} Dec 08 17:54:36 crc kubenswrapper[5112]: I1208 17:54:36.027953 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-6t57d" event={"ID":"495b8a71-df81-49dc-8823-bbee4350dfc0","Type":"ContainerStarted","Data":"e19cedc9d451fd7422ebe25de921aca98064299be70c8191413ace3a91738439"} 
Dec 08 17:54:36 crc kubenswrapper[5112]: I1208 17:54:36.056448 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8wvbz\" (UniqueName: \"kubernetes.io/projected/a1ec93f6-80fd-49ea-ad57-6dbc77c4a5f9-kube-api-access-8wvbz\") pod \"cert-manager-cainjector-7dbf76d5c8-x2rmw\" (UID: \"a1ec93f6-80fd-49ea-ad57-6dbc77c4a5f9\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-x2rmw"
Dec 08 17:54:36 crc kubenswrapper[5112]: I1208 17:54:36.056531 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a1ec93f6-80fd-49ea-ad57-6dbc77c4a5f9-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-x2rmw\" (UID: \"a1ec93f6-80fd-49ea-ad57-6dbc77c4a5f9\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-x2rmw"
Dec 08 17:54:36 crc kubenswrapper[5112]: I1208 17:54:36.081108 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a1ec93f6-80fd-49ea-ad57-6dbc77c4a5f9-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-x2rmw\" (UID: \"a1ec93f6-80fd-49ea-ad57-6dbc77c4a5f9\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-x2rmw"
Dec 08 17:54:36 crc kubenswrapper[5112]: I1208 17:54:36.081201 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wvbz\" (UniqueName: \"kubernetes.io/projected/a1ec93f6-80fd-49ea-ad57-6dbc77c4a5f9-kube-api-access-8wvbz\") pod \"cert-manager-cainjector-7dbf76d5c8-x2rmw\" (UID: \"a1ec93f6-80fd-49ea-ad57-6dbc77c4a5f9\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-x2rmw"
Dec 08 17:54:36 crc kubenswrapper[5112]: I1208 17:54:36.286349 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-x2rmw"
Dec 08 17:54:36 crc kubenswrapper[5112]: I1208 17:54:36.525769 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-x2rmw"]
Dec 08 17:54:36 crc kubenswrapper[5112]: W1208 17:54:36.536448 5112 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1ec93f6_80fd_49ea_ad57_6dbc77c4a5f9.slice/crio-7ed4328dfb2ed651e5940312e0d420e075a94e15e13cb9bbbf8addf38aa09f36 WatchSource:0}: Error finding container 7ed4328dfb2ed651e5940312e0d420e075a94e15e13cb9bbbf8addf38aa09f36: Status 404 returned error can't find the container with id 7ed4328dfb2ed651e5940312e0d420e075a94e15e13cb9bbbf8addf38aa09f36
Dec 08 17:54:37 crc kubenswrapper[5112]: I1208 17:54:37.039053 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-x2rmw" event={"ID":"a1ec93f6-80fd-49ea-ad57-6dbc77c4a5f9","Type":"ContainerStarted","Data":"7ed4328dfb2ed651e5940312e0d420e075a94e15e13cb9bbbf8addf38aa09f36"}
Dec 08 17:54:41 crc kubenswrapper[5112]: I1208 17:54:41.707184 5112 patch_prober.go:28] interesting pod/machine-config-daemon-s6wzf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 17:54:41 crc kubenswrapper[5112]: I1208 17:54:41.707589 5112 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" podUID="95e46da0-94bb-4d22-804b-b3018984cdac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 17:54:41 crc kubenswrapper[5112]: I1208 17:54:41.707658 5112 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf"
Dec 08 17:54:41 crc kubenswrapper[5112]: I1208 17:54:41.708783 5112 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"240b1d29409d9f35aedfce10e5ba170d923c2b90de94cecbc02a5feba56821b7"} pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 08 17:54:41 crc kubenswrapper[5112]: I1208 17:54:41.708891 5112 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" podUID="95e46da0-94bb-4d22-804b-b3018984cdac" containerName="machine-config-daemon" containerID="cri-o://240b1d29409d9f35aedfce10e5ba170d923c2b90de94cecbc02a5feba56821b7" gracePeriod=600
Dec 08 17:54:42 crc kubenswrapper[5112]: I1208 17:54:42.192514 5112 generic.go:358] "Generic (PLEG): container finished" podID="95e46da0-94bb-4d22-804b-b3018984cdac" containerID="240b1d29409d9f35aedfce10e5ba170d923c2b90de94cecbc02a5feba56821b7" exitCode=0
Dec 08 17:54:42 crc kubenswrapper[5112]: I1208 17:54:42.192591 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" event={"ID":"95e46da0-94bb-4d22-804b-b3018984cdac","Type":"ContainerDied","Data":"240b1d29409d9f35aedfce10e5ba170d923c2b90de94cecbc02a5feba56821b7"}
Dec 08 17:54:42 crc kubenswrapper[5112]: I1208 17:54:42.193053 5112 scope.go:117] "RemoveContainer" containerID="77f5ad0ee85d883c620f8b160d1de9715081e996ed78a3f7e153e91f47fae509"
Dec 08 17:54:44 crc kubenswrapper[5112]: I1208 17:54:44.316016 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Dec 08 17:54:44 crc kubenswrapper[5112]: I1208 17:54:44.331206 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build"
Dec 08 17:54:44 crc kubenswrapper[5112]: I1208 17:54:44.331679 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Dec 08 17:54:44 crc kubenswrapper[5112]: I1208 17:54:44.334900 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-c6q5x\""
Dec 08 17:54:44 crc kubenswrapper[5112]: I1208 17:54:44.334976 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-sys-config\""
Dec 08 17:54:44 crc kubenswrapper[5112]: I1208 17:54:44.335006 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-global-ca\""
Dec 08 17:54:44 crc kubenswrapper[5112]: I1208 17:54:44.335273 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-ca\""
Dec 08 17:54:44 crc kubenswrapper[5112]: I1208 17:54:44.423752 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-c6q5x-pull\" (UniqueName: \"kubernetes.io/secret/7aa1750c-350f-4e7a-aae9-2e877bce69cb-builder-dockercfg-c6q5x-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " pod="service-telemetry/service-telemetry-operator-1-build"
Dec 08 17:54:44 crc kubenswrapper[5112]: I1208 17:54:44.423837 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7aa1750c-350f-4e7a-aae9-2e877bce69cb-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " pod="service-telemetry/service-telemetry-operator-1-build"
Dec 08 17:54:44 crc kubenswrapper[5112]: I1208 17:54:44.423886 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7aa1750c-350f-4e7a-aae9-2e877bce69cb-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " pod="service-telemetry/service-telemetry-operator-1-build"
Dec 08 17:54:44 crc kubenswrapper[5112]: I1208 17:54:44.423924 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7aa1750c-350f-4e7a-aae9-2e877bce69cb-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " pod="service-telemetry/service-telemetry-operator-1-build"
Dec 08 17:54:44 crc kubenswrapper[5112]: I1208 17:54:44.424018 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7aa1750c-350f-4e7a-aae9-2e877bce69cb-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " pod="service-telemetry/service-telemetry-operator-1-build"
Dec 08 17:54:44 crc kubenswrapper[5112]: I1208 17:54:44.424155 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7aa1750c-350f-4e7a-aae9-2e877bce69cb-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " pod="service-telemetry/service-telemetry-operator-1-build"
Dec 08 17:54:44 crc kubenswrapper[5112]: I1208 17:54:44.424309 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md25b\" (UniqueName: \"kubernetes.io/projected/7aa1750c-350f-4e7a-aae9-2e877bce69cb-kube-api-access-md25b\") pod \"service-telemetry-operator-1-build\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " pod="service-telemetry/service-telemetry-operator-1-build"
Dec 08 17:54:44 crc kubenswrapper[5112]: I1208 17:54:44.424379 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7aa1750c-350f-4e7a-aae9-2e877bce69cb-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " pod="service-telemetry/service-telemetry-operator-1-build"
Dec 08 17:54:44 crc kubenswrapper[5112]: I1208 17:54:44.424460 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7aa1750c-350f-4e7a-aae9-2e877bce69cb-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " pod="service-telemetry/service-telemetry-operator-1-build"
Dec 08 17:54:44 crc kubenswrapper[5112]: I1208 17:54:44.424482 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-c6q5x-push\" (UniqueName: \"kubernetes.io/secret/7aa1750c-350f-4e7a-aae9-2e877bce69cb-builder-dockercfg-c6q5x-push\") pod \"service-telemetry-operator-1-build\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " pod="service-telemetry/service-telemetry-operator-1-build"
Dec 08 17:54:44 crc kubenswrapper[5112]: I1208 17:54:44.424532 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7aa1750c-350f-4e7a-aae9-2e877bce69cb-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " pod="service-telemetry/service-telemetry-operator-1-build"
Dec 08 17:54:44 crc kubenswrapper[5112]: I1208 17:54:44.424586 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7aa1750c-350f-4e7a-aae9-2e877bce69cb-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " pod="service-telemetry/service-telemetry-operator-1-build"
Dec 08 17:54:44 crc kubenswrapper[5112]: I1208 17:54:44.525890 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7aa1750c-350f-4e7a-aae9-2e877bce69cb-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " pod="service-telemetry/service-telemetry-operator-1-build"
Dec 08 17:54:44 crc kubenswrapper[5112]: I1208 17:54:44.525952 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7aa1750c-350f-4e7a-aae9-2e877bce69cb-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " pod="service-telemetry/service-telemetry-operator-1-build"
Dec 08 17:54:44 crc kubenswrapper[5112]: I1208 17:54:44.525977 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7aa1750c-350f-4e7a-aae9-2e877bce69cb-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " pod="service-telemetry/service-telemetry-operator-1-build"
Dec 08 17:54:44 crc kubenswrapper[5112]: I1208 17:54:44.526036 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7aa1750c-350f-4e7a-aae9-2e877bce69cb-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " pod="service-telemetry/service-telemetry-operator-1-build"
Dec 08 17:54:44 crc kubenswrapper[5112]: I1208 17:54:44.526066 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7aa1750c-350f-4e7a-aae9-2e877bce69cb-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " pod="service-telemetry/service-telemetry-operator-1-build"
Dec 08 17:54:44 crc kubenswrapper[5112]: I1208 17:54:44.526142 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7aa1750c-350f-4e7a-aae9-2e877bce69cb-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " pod="service-telemetry/service-telemetry-operator-1-build"
Dec 08 17:54:44 crc kubenswrapper[5112]: I1208 17:54:44.526230 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-md25b\" (UniqueName: \"kubernetes.io/projected/7aa1750c-350f-4e7a-aae9-2e877bce69cb-kube-api-access-md25b\") pod \"service-telemetry-operator-1-build\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " pod="service-telemetry/service-telemetry-operator-1-build"
Dec 08 17:54:44 crc kubenswrapper[5112]: I1208 17:54:44.526266 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7aa1750c-350f-4e7a-aae9-2e877bce69cb-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " pod="service-telemetry/service-telemetry-operator-1-build"
Dec 08 17:54:44 crc kubenswrapper[5112]: I1208 17:54:44.526338 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7aa1750c-350f-4e7a-aae9-2e877bce69cb-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " pod="service-telemetry/service-telemetry-operator-1-build"
Dec 08 17:54:44 crc kubenswrapper[5112]: I1208 17:54:44.526356 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-c6q5x-push\" (UniqueName: \"kubernetes.io/secret/7aa1750c-350f-4e7a-aae9-2e877bce69cb-builder-dockercfg-c6q5x-push\") pod \"service-telemetry-operator-1-build\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " pod="service-telemetry/service-telemetry-operator-1-build"
Dec 08 17:54:44 crc kubenswrapper[5112]: I1208 17:54:44.526416 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7aa1750c-350f-4e7a-aae9-2e877bce69cb-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " pod="service-telemetry/service-telemetry-operator-1-build"
Dec 08 17:54:44 crc kubenswrapper[5112]: I1208 17:54:44.526415 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7aa1750c-350f-4e7a-aae9-2e877bce69cb-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " pod="service-telemetry/service-telemetry-operator-1-build"
Dec 08 17:54:44 crc kubenswrapper[5112]: I1208 17:54:44.526447 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7aa1750c-350f-4e7a-aae9-2e877bce69cb-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " pod="service-telemetry/service-telemetry-operator-1-build"
Dec 08 17:54:44 crc kubenswrapper[5112]: I1208 17:54:44.526488 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7aa1750c-350f-4e7a-aae9-2e877bce69cb-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " pod="service-telemetry/service-telemetry-operator-1-build"
Dec 08 17:54:44 crc kubenswrapper[5112]: I1208 17:54:44.526567 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-c6q5x-pull\" (UniqueName: \"kubernetes.io/secret/7aa1750c-350f-4e7a-aae9-2e877bce69cb-builder-dockercfg-c6q5x-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " pod="service-telemetry/service-telemetry-operator-1-build"
Dec 08 17:54:44 crc kubenswrapper[5112]: I1208 17:54:44.526683 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7aa1750c-350f-4e7a-aae9-2e877bce69cb-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " pod="service-telemetry/service-telemetry-operator-1-build"
Dec 08 17:54:44 crc kubenswrapper[5112]: I1208 17:54:44.526706 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7aa1750c-350f-4e7a-aae9-2e877bce69cb-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " pod="service-telemetry/service-telemetry-operator-1-build"
Dec 08 17:54:44 crc kubenswrapper[5112]: I1208 17:54:44.526842 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7aa1750c-350f-4e7a-aae9-2e877bce69cb-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " pod="service-telemetry/service-telemetry-operator-1-build"
Dec 08 17:54:44 crc kubenswrapper[5112]: I1208 17:54:44.526858 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7aa1750c-350f-4e7a-aae9-2e877bce69cb-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " pod="service-telemetry/service-telemetry-operator-1-build"
Dec 08 17:54:44 crc kubenswrapper[5112]: I1208 17:54:44.527207 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7aa1750c-350f-4e7a-aae9-2e877bce69cb-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " pod="service-telemetry/service-telemetry-operator-1-build"
Dec 08 17:54:44 crc kubenswrapper[5112]: I1208 17:54:44.527769 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7aa1750c-350f-4e7a-aae9-2e877bce69cb-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " pod="service-telemetry/service-telemetry-operator-1-build"
Dec 08 17:54:44 crc kubenswrapper[5112]: I1208 17:54:44.532543 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-c6q5x-pull\" (UniqueName: \"kubernetes.io/secret/7aa1750c-350f-4e7a-aae9-2e877bce69cb-builder-dockercfg-c6q5x-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " pod="service-telemetry/service-telemetry-operator-1-build"
Dec 08 17:54:44 crc kubenswrapper[5112]: I1208 17:54:44.535445 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-c6q5x-push\" (UniqueName: \"kubernetes.io/secret/7aa1750c-350f-4e7a-aae9-2e877bce69cb-builder-dockercfg-c6q5x-push\") pod \"service-telemetry-operator-1-build\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " pod="service-telemetry/service-telemetry-operator-1-build"
Dec 08 17:54:44 crc kubenswrapper[5112]: I1208 17:54:44.550823 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-md25b\" (UniqueName: \"kubernetes.io/projected/7aa1750c-350f-4e7a-aae9-2e877bce69cb-kube-api-access-md25b\") pod \"service-telemetry-operator-1-build\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " pod="service-telemetry/service-telemetry-operator-1-build"
Dec 08 17:54:44 crc kubenswrapper[5112]: I1208 17:54:44.667521 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build"
Dec 08 17:54:45 crc kubenswrapper[5112]: I1208 17:54:45.522318 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858d87f86b-q8wm5"]
Dec 08 17:54:46 crc kubenswrapper[5112]: I1208 17:54:46.038231 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-q8wm5"]
Dec 08 17:54:46 crc kubenswrapper[5112]: I1208 17:54:46.038373 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-q8wm5"
Dec 08 17:54:46 crc kubenswrapper[5112]: I1208 17:54:46.041404 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-rgzhw\""
Dec 08 17:54:46 crc kubenswrapper[5112]: I1208 17:54:46.150843 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr6nh\" (UniqueName: \"kubernetes.io/projected/f5b85137-00bb-40de-bc66-873e578b00dc-kube-api-access-zr6nh\") pod \"cert-manager-858d87f86b-q8wm5\" (UID: \"f5b85137-00bb-40de-bc66-873e578b00dc\") " pod="cert-manager/cert-manager-858d87f86b-q8wm5"
Dec 08 17:54:46 crc kubenswrapper[5112]: I1208 17:54:46.150933 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f5b85137-00bb-40de-bc66-873e578b00dc-bound-sa-token\") pod \"cert-manager-858d87f86b-q8wm5\" (UID: \"f5b85137-00bb-40de-bc66-873e578b00dc\") " pod="cert-manager/cert-manager-858d87f86b-q8wm5"
Dec 08 17:54:46 crc kubenswrapper[5112]: I1208 17:54:46.251883 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zr6nh\" (UniqueName: \"kubernetes.io/projected/f5b85137-00bb-40de-bc66-873e578b00dc-kube-api-access-zr6nh\") pod \"cert-manager-858d87f86b-q8wm5\" (UID: \"f5b85137-00bb-40de-bc66-873e578b00dc\") " pod="cert-manager/cert-manager-858d87f86b-q8wm5"
Dec 08 17:54:46 crc kubenswrapper[5112]: I1208 17:54:46.251960 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f5b85137-00bb-40de-bc66-873e578b00dc-bound-sa-token\") pod \"cert-manager-858d87f86b-q8wm5\" (UID: \"f5b85137-00bb-40de-bc66-873e578b00dc\") " pod="cert-manager/cert-manager-858d87f86b-q8wm5"
Dec 08 17:54:46 crc kubenswrapper[5112]: I1208 17:54:46.275536 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr6nh\" (UniqueName: \"kubernetes.io/projected/f5b85137-00bb-40de-bc66-873e578b00dc-kube-api-access-zr6nh\") pod \"cert-manager-858d87f86b-q8wm5\" (UID: \"f5b85137-00bb-40de-bc66-873e578b00dc\") " pod="cert-manager/cert-manager-858d87f86b-q8wm5"
Dec 08 17:54:46 crc kubenswrapper[5112]: I1208 17:54:46.275948 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f5b85137-00bb-40de-bc66-873e578b00dc-bound-sa-token\") pod \"cert-manager-858d87f86b-q8wm5\" (UID: \"f5b85137-00bb-40de-bc66-873e578b00dc\") " pod="cert-manager/cert-manager-858d87f86b-q8wm5"
Dec 08 17:54:46 crc kubenswrapper[5112]: I1208 17:54:46.368405 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-q8wm5"
Dec 08 17:54:54 crc kubenswrapper[5112]: I1208 17:54:54.893215 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Dec 08 17:54:56 crc kubenswrapper[5112]: I1208 17:54:56.941305 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"]
Dec 08 17:54:57 crc kubenswrapper[5112]: I1208 17:54:57.194897 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"]
Dec 08 17:54:57 crc kubenswrapper[5112]: I1208 17:54:57.195134 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build"
Dec 08 17:54:57 crc kubenswrapper[5112]: I1208 17:54:57.198003 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-global-ca\""
Dec 08 17:54:57 crc kubenswrapper[5112]: I1208 17:54:57.198462 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-ca\""
Dec 08 17:54:57 crc kubenswrapper[5112]: I1208 17:54:57.199200 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-sys-config\""
Dec 08 17:54:57 crc kubenswrapper[5112]: I1208 17:54:57.427891 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-c6q5x-pull\" (UniqueName: \"kubernetes.io/secret/3f65d72f-81b2-44b0-b474-fac6735fcb1c-builder-dockercfg-c6q5x-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 08 17:54:57 crc kubenswrapper[5112]: I1208 17:54:57.427950 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3f65d72f-81b2-44b0-b474-fac6735fcb1c-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 08 17:54:57 crc kubenswrapper[5112]: I1208 17:54:57.427973 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtvhb\" (UniqueName: \"kubernetes.io/projected/3f65d72f-81b2-44b0-b474-fac6735fcb1c-kube-api-access-jtvhb\") pod \"service-telemetry-operator-2-build\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 08 17:54:57 crc kubenswrapper[5112]: I1208 17:54:57.428019 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-c6q5x-push\" (UniqueName: \"kubernetes.io/secret/3f65d72f-81b2-44b0-b474-fac6735fcb1c-builder-dockercfg-c6q5x-push\") pod \"service-telemetry-operator-2-build\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 08 17:54:57 crc kubenswrapper[5112]: I1208 17:54:57.428042 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3f65d72f-81b2-44b0-b474-fac6735fcb1c-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 08 17:54:57 crc kubenswrapper[5112]: I1208 17:54:57.428070 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3f65d72f-81b2-44b0-b474-fac6735fcb1c-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 08 17:54:57 crc kubenswrapper[5112]: I1208 17:54:57.428136 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3f65d72f-81b2-44b0-b474-fac6735fcb1c-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 08 17:54:57 crc kubenswrapper[5112]: I1208 17:54:57.428178 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3f65d72f-81b2-44b0-b474-fac6735fcb1c-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 08 17:54:57 crc kubenswrapper[5112]: I1208 17:54:57.428735 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3f65d72f-81b2-44b0-b474-fac6735fcb1c-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 08 17:54:57 crc kubenswrapper[5112]: I1208 17:54:57.428927 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3f65d72f-81b2-44b0-b474-fac6735fcb1c-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 08 17:54:57 crc kubenswrapper[5112]: I1208 17:54:57.429156 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3f65d72f-81b2-44b0-b474-fac6735fcb1c-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 08 17:54:57 crc kubenswrapper[5112]: I1208 17:54:57.429313 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3f65d72f-81b2-44b0-b474-fac6735fcb1c-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 08 17:54:57 crc kubenswrapper[5112]: I1208 17:54:57.530412 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3f65d72f-81b2-44b0-b474-fac6735fcb1c-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 08 17:54:57 crc kubenswrapper[5112]: I1208 17:54:57.530845 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jtvhb\" (UniqueName: \"kubernetes.io/projected/3f65d72f-81b2-44b0-b474-fac6735fcb1c-kube-api-access-jtvhb\") pod \"service-telemetry-operator-2-build\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 08 17:54:57 crc kubenswrapper[5112]: I1208 17:54:57.530987 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-c6q5x-push\" (UniqueName: \"kubernetes.io/secret/3f65d72f-81b2-44b0-b474-fac6735fcb1c-builder-dockercfg-c6q5x-push\") pod \"service-telemetry-operator-2-build\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 08 17:54:57 crc kubenswrapper[5112]: I1208 17:54:57.531116 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3f65d72f-81b2-44b0-b474-fac6735fcb1c-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 08 17:54:57 crc kubenswrapper[5112]: I1208 17:54:57.531236 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3f65d72f-81b2-44b0-b474-fac6735fcb1c-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 08 17:54:57 crc kubenswrapper[5112]: I1208 17:54:57.531341 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3f65d72f-81b2-44b0-b474-fac6735fcb1c-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 08 17:54:57 crc kubenswrapper[5112]: I1208 17:54:57.531450 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3f65d72f-81b2-44b0-b474-fac6735fcb1c-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 08 17:54:57 crc kubenswrapper[5112]: I1208 17:54:57.531553 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3f65d72f-81b2-44b0-b474-fac6735fcb1c-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 08 17:54:57 crc kubenswrapper[5112]: I1208 17:54:57.531470 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3f65d72f-81b2-44b0-b474-fac6735fcb1c-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 08 17:54:57 crc kubenswrapper[5112]: I1208 17:54:57.532140 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3f65d72f-81b2-44b0-b474-fac6735fcb1c-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 08 17:54:57 crc kubenswrapper[5112]: I1208 17:54:57.532210 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3f65d72f-81b2-44b0-b474-fac6735fcb1c-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 08 17:54:57 crc kubenswrapper[5112]: I1208 17:54:57.532329 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3f65d72f-81b2-44b0-b474-fac6735fcb1c-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 08 17:54:57 crc kubenswrapper[5112]: I1208 17:54:57.533469 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3f65d72f-81b2-44b0-b474-fac6735fcb1c-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 08 17:54:57 crc kubenswrapper[5112]: I1208 17:54:57.534305 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3f65d72f-81b2-44b0-b474-fac6735fcb1c-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 08 17:54:57 crc kubenswrapper[5112]: I1208 17:54:57.534417 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3f65d72f-81b2-44b0-b474-fac6735fcb1c-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 08 17:54:57 crc kubenswrapper[5112]: I1208 17:54:57.534532 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3f65d72f-81b2-44b0-b474-fac6735fcb1c-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 08 17:54:57 crc kubenswrapper[5112]: I1208 17:54:57.534536 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3f65d72f-81b2-44b0-b474-fac6735fcb1c-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 08 17:54:57 crc kubenswrapper[5112]: I1208 17:54:57.534660 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-c6q5x-pull\" (UniqueName: \"kubernetes.io/secret/3f65d72f-81b2-44b0-b474-fac6735fcb1c-builder-dockercfg-c6q5x-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " pod="service-telemetry/service-telemetry-operator-2-build"
Dec 08 17:54:57 crc kubenswrapper[5112]: I1208 17:54:57.533649 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3f65d72f-81b2-44b0-b474-fac6735fcb1c-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") "
pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:54:57 crc kubenswrapper[5112]: I1208 17:54:57.535724 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3f65d72f-81b2-44b0-b474-fac6735fcb1c-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:54:57 crc kubenswrapper[5112]: I1208 17:54:57.545274 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3f65d72f-81b2-44b0-b474-fac6735fcb1c-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:54:57 crc kubenswrapper[5112]: I1208 17:54:57.550991 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtvhb\" (UniqueName: \"kubernetes.io/projected/3f65d72f-81b2-44b0-b474-fac6735fcb1c-kube-api-access-jtvhb\") pod \"service-telemetry-operator-2-build\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:54:57 crc kubenswrapper[5112]: I1208 17:54:57.556499 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-c6q5x-push\" (UniqueName: \"kubernetes.io/secret/3f65d72f-81b2-44b0-b474-fac6735fcb1c-builder-dockercfg-c6q5x-push\") pod \"service-telemetry-operator-2-build\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:54:57 crc kubenswrapper[5112]: I1208 17:54:57.558025 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-c6q5x-pull\" (UniqueName: \"kubernetes.io/secret/3f65d72f-81b2-44b0-b474-fac6735fcb1c-builder-dockercfg-c6q5x-pull\") pod 
\"service-telemetry-operator-2-build\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:54:57 crc kubenswrapper[5112]: I1208 17:54:57.649152 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:55:03 crc kubenswrapper[5112]: I1208 17:55:03.605049 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 08 17:55:03 crc kubenswrapper[5112]: I1208 17:55:03.761694 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-q8wm5"] Dec 08 17:55:03 crc kubenswrapper[5112]: W1208 17:55:03.775012 5112 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5b85137_00bb_40de_bc66_873e578b00dc.slice/crio-752eec85093468749f0d04d1115bd3fc928d0447226b56710184784c271c9ebe WatchSource:0}: Error finding container 752eec85093468749f0d04d1115bd3fc928d0447226b56710184784c271c9ebe: Status 404 returned error can't find the container with id 752eec85093468749f0d04d1115bd3fc928d0447226b56710184784c271c9ebe Dec 08 17:55:03 crc kubenswrapper[5112]: I1208 17:55:03.840299 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 08 17:55:03 crc kubenswrapper[5112]: W1208 17:55:03.858155 5112 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f65d72f_81b2_44b0_b474_fac6735fcb1c.slice/crio-6e75cb578328d976f25c12847444ccc9b56b20a74f44c19e27abe6c54c1a4e0a WatchSource:0}: Error finding container 6e75cb578328d976f25c12847444ccc9b56b20a74f44c19e27abe6c54c1a4e0a: Status 404 returned error can't find the container with id 6e75cb578328d976f25c12847444ccc9b56b20a74f44c19e27abe6c54c1a4e0a Dec 08 17:55:04 crc kubenswrapper[5112]: I1208 
17:55:04.594213 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-6t57d" event={"ID":"495b8a71-df81-49dc-8823-bbee4350dfc0","Type":"ContainerStarted","Data":"b907ea1e900c132c7abb75674bb0105ee1520b2fade6a10010d191cecce93f0b"} Dec 08 17:55:04 crc kubenswrapper[5112]: I1208 17:55:04.600262 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-q8wm5" event={"ID":"f5b85137-00bb-40de-bc66-873e578b00dc","Type":"ContainerStarted","Data":"55300f0886c73d27352bda4a5537899682dec4984ee09c2d193f380c2ce0d383"} Dec 08 17:55:04 crc kubenswrapper[5112]: I1208 17:55:04.600688 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-q8wm5" event={"ID":"f5b85137-00bb-40de-bc66-873e578b00dc","Type":"ContainerStarted","Data":"752eec85093468749f0d04d1115bd3fc928d0447226b56710184784c271c9ebe"} Dec 08 17:55:04 crc kubenswrapper[5112]: I1208 17:55:04.604393 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"3f65d72f-81b2-44b0-b474-fac6735fcb1c","Type":"ContainerStarted","Data":"6e75cb578328d976f25c12847444ccc9b56b20a74f44c19e27abe6c54c1a4e0a"} Dec 08 17:55:04 crc kubenswrapper[5112]: I1208 17:55:04.604728 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-6t57d" Dec 08 17:55:04 crc kubenswrapper[5112]: I1208 17:55:04.604855 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"7aa1750c-350f-4e7a-aae9-2e877bce69cb","Type":"ContainerStarted","Data":"717534ed4dc7b9c31c0c47c18f559e4b051ce2397e79168e4eb540f24aca31a8"} Dec 08 17:55:04 crc kubenswrapper[5112]: I1208 17:55:04.610045 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" 
event={"ID":"95e46da0-94bb-4d22-804b-b3018984cdac","Type":"ContainerStarted","Data":"7df939746405f1e31b0b6c600b41c2ef2f32d550ab3e537995db11242c570dc3"} Dec 08 17:55:04 crc kubenswrapper[5112]: I1208 17:55:04.624167 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21","Type":"ContainerStarted","Data":"de873966641dd4fd019cd13dc783dd367f9850ef54ec91c397a65e70c92eb7e0"} Dec 08 17:55:04 crc kubenswrapper[5112]: I1208 17:55:04.624940 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-7894b5b9b4-6t57d" podStartSLOduration=2.618954714 podStartE2EDuration="30.624923649s" podCreationTimestamp="2025-12-08 17:54:34 +0000 UTC" firstStartedPulling="2025-12-08 17:54:35.457520383 +0000 UTC m=+852.467069084" lastFinishedPulling="2025-12-08 17:55:03.463489318 +0000 UTC m=+880.473038019" observedRunningTime="2025-12-08 17:55:04.623548591 +0000 UTC m=+881.633097312" watchObservedRunningTime="2025-12-08 17:55:04.624923649 +0000 UTC m=+881.634472340" Dec 08 17:55:04 crc kubenswrapper[5112]: I1208 17:55:04.629409 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-x2rmw" event={"ID":"a1ec93f6-80fd-49ea-ad57-6dbc77c4a5f9","Type":"ContainerStarted","Data":"7cc12b004d6c0e34f1783d50028a8a2f6480cb410416e8fa590a6c9be65b0498"} Dec 08 17:55:04 crc kubenswrapper[5112]: I1208 17:55:04.683211 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858d87f86b-q8wm5" podStartSLOduration=19.683195458 podStartE2EDuration="19.683195458s" podCreationTimestamp="2025-12-08 17:54:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:55:04.659384869 +0000 UTC m=+881.668933560" watchObservedRunningTime="2025-12-08 17:55:04.683195458 +0000 UTC 
m=+881.692744149" Dec 08 17:55:04 crc kubenswrapper[5112]: I1208 17:55:04.684425 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-x2rmw" podStartSLOduration=3.007728187 podStartE2EDuration="29.684419592s" podCreationTimestamp="2025-12-08 17:54:35 +0000 UTC" firstStartedPulling="2025-12-08 17:54:36.538557901 +0000 UTC m=+853.548106602" lastFinishedPulling="2025-12-08 17:55:03.215249306 +0000 UTC m=+880.224798007" observedRunningTime="2025-12-08 17:55:04.677601376 +0000 UTC m=+881.687150077" watchObservedRunningTime="2025-12-08 17:55:04.684419592 +0000 UTC m=+881.693968293" Dec 08 17:55:04 crc kubenswrapper[5112]: I1208 17:55:04.815606 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 08 17:55:04 crc kubenswrapper[5112]: I1208 17:55:04.850674 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 08 17:55:06 crc kubenswrapper[5112]: I1208 17:55:06.648731 5112 generic.go:358] "Generic (PLEG): container finished" podID="299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21" containerID="de873966641dd4fd019cd13dc783dd367f9850ef54ec91c397a65e70c92eb7e0" exitCode=0 Dec 08 17:55:06 crc kubenswrapper[5112]: I1208 17:55:06.648851 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21","Type":"ContainerDied","Data":"de873966641dd4fd019cd13dc783dd367f9850ef54ec91c397a65e70c92eb7e0"} Dec 08 17:55:07 crc kubenswrapper[5112]: I1208 17:55:07.657044 5112 generic.go:358] "Generic (PLEG): container finished" podID="299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21" containerID="d6dd4fabf3989513aaccc366fd5b03598bdbc68fe71ec6cc709667aa02c57e6f" exitCode=0 Dec 08 17:55:07 crc kubenswrapper[5112]: I1208 17:55:07.657135 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" 
event={"ID":"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21","Type":"ContainerDied","Data":"d6dd4fabf3989513aaccc366fd5b03598bdbc68fe71ec6cc709667aa02c57e6f"} Dec 08 17:55:08 crc kubenswrapper[5112]: I1208 17:55:08.671239 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21","Type":"ContainerStarted","Data":"80012c987ae0c5371f3d99cd19ccc8e1df954d035b6a2aa872b0758de853e12e"} Dec 08 17:55:08 crc kubenswrapper[5112]: I1208 17:55:08.671602 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:08 crc kubenswrapper[5112]: I1208 17:55:08.705245 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=6.183452544 podStartE2EDuration="34.705226779s" podCreationTimestamp="2025-12-08 17:54:34 +0000 UTC" firstStartedPulling="2025-12-08 17:54:35.393430375 +0000 UTC m=+852.402979076" lastFinishedPulling="2025-12-08 17:55:03.91520461 +0000 UTC m=+880.924753311" observedRunningTime="2025-12-08 17:55:08.700424478 +0000 UTC m=+885.709973199" watchObservedRunningTime="2025-12-08 17:55:08.705226779 +0000 UTC m=+885.714775480" Dec 08 17:55:11 crc kubenswrapper[5112]: I1208 17:55:11.652691 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-6t57d" Dec 08 17:55:12 crc kubenswrapper[5112]: I1208 17:55:12.719373 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"3f65d72f-81b2-44b0-b474-fac6735fcb1c","Type":"ContainerStarted","Data":"906e84e8233cfc26d5501b806251ed4765da78008510ae7be1edb438ce9edcd8"} Dec 08 17:55:12 crc kubenswrapper[5112]: I1208 17:55:12.721336 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" 
event={"ID":"7aa1750c-350f-4e7a-aae9-2e877bce69cb","Type":"ContainerStarted","Data":"181cf3d34427436c66f5ba463dff09ee027eea973978c303caf6ee282b9ff274"} Dec 08 17:55:12 crc kubenswrapper[5112]: I1208 17:55:12.721434 5112 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-1-build" podUID="7aa1750c-350f-4e7a-aae9-2e877bce69cb" containerName="manage-dockerfile" containerID="cri-o://181cf3d34427436c66f5ba463dff09ee027eea973978c303caf6ee282b9ff274" gracePeriod=30 Dec 08 17:55:12 crc kubenswrapper[5112]: I1208 17:55:12.795457 5112 ???:1] "http: TLS handshake error from 192.168.126.11:55630: no serving certificate available for the kubelet" Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.133161 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_7aa1750c-350f-4e7a-aae9-2e877bce69cb/manage-dockerfile/0.log" Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.133477 5112 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.266264 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7aa1750c-350f-4e7a-aae9-2e877bce69cb-container-storage-root\") pod \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.266374 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7aa1750c-350f-4e7a-aae9-2e877bce69cb-buildcachedir\") pod \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.266567 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7aa1750c-350f-4e7a-aae9-2e877bce69cb-container-storage-run\") pod \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.266627 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7aa1750c-350f-4e7a-aae9-2e877bce69cb-build-system-configs\") pod \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.266624 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7aa1750c-350f-4e7a-aae9-2e877bce69cb-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "7aa1750c-350f-4e7a-aae9-2e877bce69cb" (UID: "7aa1750c-350f-4e7a-aae9-2e877bce69cb"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.266736 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7aa1750c-350f-4e7a-aae9-2e877bce69cb-build-ca-bundles\") pod \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.266780 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7aa1750c-350f-4e7a-aae9-2e877bce69cb-build-blob-cache\") pod \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.266816 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7aa1750c-350f-4e7a-aae9-2e877bce69cb-node-pullsecrets\") pod \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.266862 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-c6q5x-push\" (UniqueName: \"kubernetes.io/secret/7aa1750c-350f-4e7a-aae9-2e877bce69cb-builder-dockercfg-c6q5x-push\") pod \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.266895 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-c6q5x-pull\" (UniqueName: \"kubernetes.io/secret/7aa1750c-350f-4e7a-aae9-2e877bce69cb-builder-dockercfg-c6q5x-pull\") pod \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.266925 5112 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7aa1750c-350f-4e7a-aae9-2e877bce69cb-build-proxy-ca-bundles\") pod \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.266974 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-md25b\" (UniqueName: \"kubernetes.io/projected/7aa1750c-350f-4e7a-aae9-2e877bce69cb-kube-api-access-md25b\") pod \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.267031 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7aa1750c-350f-4e7a-aae9-2e877bce69cb-buildworkdir\") pod \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\" (UID: \"7aa1750c-350f-4e7a-aae9-2e877bce69cb\") " Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.267184 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7aa1750c-350f-4e7a-aae9-2e877bce69cb-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "7aa1750c-350f-4e7a-aae9-2e877bce69cb" (UID: "7aa1750c-350f-4e7a-aae9-2e877bce69cb"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.267458 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7aa1750c-350f-4e7a-aae9-2e877bce69cb-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "7aa1750c-350f-4e7a-aae9-2e877bce69cb" (UID: "7aa1750c-350f-4e7a-aae9-2e877bce69cb"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.267638 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7aa1750c-350f-4e7a-aae9-2e877bce69cb-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "7aa1750c-350f-4e7a-aae9-2e877bce69cb" (UID: "7aa1750c-350f-4e7a-aae9-2e877bce69cb"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.267906 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7aa1750c-350f-4e7a-aae9-2e877bce69cb-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "7aa1750c-350f-4e7a-aae9-2e877bce69cb" (UID: "7aa1750c-350f-4e7a-aae9-2e877bce69cb"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.268039 5112 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7aa1750c-350f-4e7a-aae9-2e877bce69cb-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.268063 5112 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7aa1750c-350f-4e7a-aae9-2e877bce69cb-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.268117 5112 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7aa1750c-350f-4e7a-aae9-2e877bce69cb-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.268129 5112 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/7aa1750c-350f-4e7a-aae9-2e877bce69cb-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.268380 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7aa1750c-350f-4e7a-aae9-2e877bce69cb-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "7aa1750c-350f-4e7a-aae9-2e877bce69cb" (UID: "7aa1750c-350f-4e7a-aae9-2e877bce69cb"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.269187 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7aa1750c-350f-4e7a-aae9-2e877bce69cb-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "7aa1750c-350f-4e7a-aae9-2e877bce69cb" (UID: "7aa1750c-350f-4e7a-aae9-2e877bce69cb"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.269292 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7aa1750c-350f-4e7a-aae9-2e877bce69cb-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "7aa1750c-350f-4e7a-aae9-2e877bce69cb" (UID: "7aa1750c-350f-4e7a-aae9-2e877bce69cb"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.269794 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7aa1750c-350f-4e7a-aae9-2e877bce69cb-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "7aa1750c-350f-4e7a-aae9-2e877bce69cb" (UID: "7aa1750c-350f-4e7a-aae9-2e877bce69cb"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.275057 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aa1750c-350f-4e7a-aae9-2e877bce69cb-builder-dockercfg-c6q5x-pull" (OuterVolumeSpecName: "builder-dockercfg-c6q5x-pull") pod "7aa1750c-350f-4e7a-aae9-2e877bce69cb" (UID: "7aa1750c-350f-4e7a-aae9-2e877bce69cb"). InnerVolumeSpecName "builder-dockercfg-c6q5x-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.275178 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aa1750c-350f-4e7a-aae9-2e877bce69cb-builder-dockercfg-c6q5x-push" (OuterVolumeSpecName: "builder-dockercfg-c6q5x-push") pod "7aa1750c-350f-4e7a-aae9-2e877bce69cb" (UID: "7aa1750c-350f-4e7a-aae9-2e877bce69cb"). InnerVolumeSpecName "builder-dockercfg-c6q5x-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.275804 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aa1750c-350f-4e7a-aae9-2e877bce69cb-kube-api-access-md25b" (OuterVolumeSpecName: "kube-api-access-md25b") pod "7aa1750c-350f-4e7a-aae9-2e877bce69cb" (UID: "7aa1750c-350f-4e7a-aae9-2e877bce69cb"). InnerVolumeSpecName "kube-api-access-md25b". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.369546 5112 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7aa1750c-350f-4e7a-aae9-2e877bce69cb-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.369585 5112 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7aa1750c-350f-4e7a-aae9-2e877bce69cb-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.369595 5112 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-c6q5x-push\" (UniqueName: \"kubernetes.io/secret/7aa1750c-350f-4e7a-aae9-2e877bce69cb-builder-dockercfg-c6q5x-push\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.369606 5112 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-c6q5x-pull\" (UniqueName: \"kubernetes.io/secret/7aa1750c-350f-4e7a-aae9-2e877bce69cb-builder-dockercfg-c6q5x-pull\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.369615 5112 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7aa1750c-350f-4e7a-aae9-2e877bce69cb-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.369625 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-md25b\" (UniqueName: \"kubernetes.io/projected/7aa1750c-350f-4e7a-aae9-2e877bce69cb-kube-api-access-md25b\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.369634 5112 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7aa1750c-350f-4e7a-aae9-2e877bce69cb-buildworkdir\") on node \"crc\" 
DevicePath \"\"" Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.369642 5112 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7aa1750c-350f-4e7a-aae9-2e877bce69cb-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.729550 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_7aa1750c-350f-4e7a-aae9-2e877bce69cb/manage-dockerfile/0.log" Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.729597 5112 generic.go:358] "Generic (PLEG): container finished" podID="7aa1750c-350f-4e7a-aae9-2e877bce69cb" containerID="181cf3d34427436c66f5ba463dff09ee027eea973978c303caf6ee282b9ff274" exitCode=2 Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.729655 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.729713 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"7aa1750c-350f-4e7a-aae9-2e877bce69cb","Type":"ContainerDied","Data":"181cf3d34427436c66f5ba463dff09ee027eea973978c303caf6ee282b9ff274"} Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.729793 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"7aa1750c-350f-4e7a-aae9-2e877bce69cb","Type":"ContainerDied","Data":"717534ed4dc7b9c31c0c47c18f559e4b051ce2397e79168e4eb540f24aca31a8"} Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.729816 5112 scope.go:117] "RemoveContainer" containerID="181cf3d34427436c66f5ba463dff09ee027eea973978c303caf6ee282b9ff274" Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.756582 5112 scope.go:117] "RemoveContainer" 
containerID="181cf3d34427436c66f5ba463dff09ee027eea973978c303caf6ee282b9ff274" Dec 08 17:55:13 crc kubenswrapper[5112]: E1208 17:55:13.756788 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"181cf3d34427436c66f5ba463dff09ee027eea973978c303caf6ee282b9ff274\": container with ID starting with 181cf3d34427436c66f5ba463dff09ee027eea973978c303caf6ee282b9ff274 not found: ID does not exist" containerID="181cf3d34427436c66f5ba463dff09ee027eea973978c303caf6ee282b9ff274" Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.756823 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"181cf3d34427436c66f5ba463dff09ee027eea973978c303caf6ee282b9ff274"} err="failed to get container status \"181cf3d34427436c66f5ba463dff09ee027eea973978c303caf6ee282b9ff274\": rpc error: code = NotFound desc = could not find container \"181cf3d34427436c66f5ba463dff09ee027eea973978c303caf6ee282b9ff274\": container with ID starting with 181cf3d34427436c66f5ba463dff09ee027eea973978c303caf6ee282b9ff274 not found: ID does not exist" Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.759814 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.765494 5112 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 08 17:55:13 crc kubenswrapper[5112]: I1208 17:55:13.829955 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 08 17:55:14 crc kubenswrapper[5112]: I1208 17:55:14.738211 5112 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-2-build" podUID="3f65d72f-81b2-44b0-b474-fac6735fcb1c" containerName="git-clone" 
containerID="cri-o://906e84e8233cfc26d5501b806251ed4765da78008510ae7be1edb438ce9edcd8" gracePeriod=30 Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.181487 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_3f65d72f-81b2-44b0-b474-fac6735fcb1c/git-clone/0.log" Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.181565 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.295035 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtvhb\" (UniqueName: \"kubernetes.io/projected/3f65d72f-81b2-44b0-b474-fac6735fcb1c-kube-api-access-jtvhb\") pod \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.295426 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3f65d72f-81b2-44b0-b474-fac6735fcb1c-build-ca-bundles\") pod \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.295446 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3f65d72f-81b2-44b0-b474-fac6735fcb1c-build-system-configs\") pod \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.295507 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3f65d72f-81b2-44b0-b474-fac6735fcb1c-build-proxy-ca-bundles\") pod \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") 
" Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.295531 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-c6q5x-pull\" (UniqueName: \"kubernetes.io/secret/3f65d72f-81b2-44b0-b474-fac6735fcb1c-builder-dockercfg-c6q5x-pull\") pod \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.295656 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3f65d72f-81b2-44b0-b474-fac6735fcb1c-buildworkdir\") pod \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.295686 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3f65d72f-81b2-44b0-b474-fac6735fcb1c-container-storage-run\") pod \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.295732 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-c6q5x-push\" (UniqueName: \"kubernetes.io/secret/3f65d72f-81b2-44b0-b474-fac6735fcb1c-builder-dockercfg-c6q5x-push\") pod \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.295852 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3f65d72f-81b2-44b0-b474-fac6735fcb1c-node-pullsecrets\") pod \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.295899 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3f65d72f-81b2-44b0-b474-fac6735fcb1c-container-storage-root\") pod \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.295896 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f65d72f-81b2-44b0-b474-fac6735fcb1c-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "3f65d72f-81b2-44b0-b474-fac6735fcb1c" (UID: "3f65d72f-81b2-44b0-b474-fac6735fcb1c"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.295950 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f65d72f-81b2-44b0-b474-fac6735fcb1c-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "3f65d72f-81b2-44b0-b474-fac6735fcb1c" (UID: "3f65d72f-81b2-44b0-b474-fac6735fcb1c"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.295976 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3f65d72f-81b2-44b0-b474-fac6735fcb1c-buildcachedir\") pod \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.296000 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3f65d72f-81b2-44b0-b474-fac6735fcb1c-build-blob-cache\") pod \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\" (UID: \"3f65d72f-81b2-44b0-b474-fac6735fcb1c\") " Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.296014 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f65d72f-81b2-44b0-b474-fac6735fcb1c-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "3f65d72f-81b2-44b0-b474-fac6735fcb1c" (UID: "3f65d72f-81b2-44b0-b474-fac6735fcb1c"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.296032 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f65d72f-81b2-44b0-b474-fac6735fcb1c-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "3f65d72f-81b2-44b0-b474-fac6735fcb1c" (UID: "3f65d72f-81b2-44b0-b474-fac6735fcb1c"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.296221 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f65d72f-81b2-44b0-b474-fac6735fcb1c-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "3f65d72f-81b2-44b0-b474-fac6735fcb1c" (UID: "3f65d72f-81b2-44b0-b474-fac6735fcb1c"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.296353 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f65d72f-81b2-44b0-b474-fac6735fcb1c-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "3f65d72f-81b2-44b0-b474-fac6735fcb1c" (UID: "3f65d72f-81b2-44b0-b474-fac6735fcb1c"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.296362 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f65d72f-81b2-44b0-b474-fac6735fcb1c-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "3f65d72f-81b2-44b0-b474-fac6735fcb1c" (UID: "3f65d72f-81b2-44b0-b474-fac6735fcb1c"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.296591 5112 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3f65d72f-81b2-44b0-b474-fac6735fcb1c-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.296620 5112 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3f65d72f-81b2-44b0-b474-fac6735fcb1c-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.296633 5112 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3f65d72f-81b2-44b0-b474-fac6735fcb1c-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.296645 5112 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3f65d72f-81b2-44b0-b474-fac6735fcb1c-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.296659 5112 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3f65d72f-81b2-44b0-b474-fac6735fcb1c-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.296675 5112 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3f65d72f-81b2-44b0-b474-fac6735fcb1c-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.296688 5112 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3f65d72f-81b2-44b0-b474-fac6735fcb1c-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:15 crc 
kubenswrapper[5112]: I1208 17:55:15.300307 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f65d72f-81b2-44b0-b474-fac6735fcb1c-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "3f65d72f-81b2-44b0-b474-fac6735fcb1c" (UID: "3f65d72f-81b2-44b0-b474-fac6735fcb1c"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.300381 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f65d72f-81b2-44b0-b474-fac6735fcb1c-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "3f65d72f-81b2-44b0-b474-fac6735fcb1c" (UID: "3f65d72f-81b2-44b0-b474-fac6735fcb1c"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.303423 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f65d72f-81b2-44b0-b474-fac6735fcb1c-builder-dockercfg-c6q5x-push" (OuterVolumeSpecName: "builder-dockercfg-c6q5x-push") pod "3f65d72f-81b2-44b0-b474-fac6735fcb1c" (UID: "3f65d72f-81b2-44b0-b474-fac6735fcb1c"). InnerVolumeSpecName "builder-dockercfg-c6q5x-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.305736 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f65d72f-81b2-44b0-b474-fac6735fcb1c-kube-api-access-jtvhb" (OuterVolumeSpecName: "kube-api-access-jtvhb") pod "3f65d72f-81b2-44b0-b474-fac6735fcb1c" (UID: "3f65d72f-81b2-44b0-b474-fac6735fcb1c"). InnerVolumeSpecName "kube-api-access-jtvhb". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.314292 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f65d72f-81b2-44b0-b474-fac6735fcb1c-builder-dockercfg-c6q5x-pull" (OuterVolumeSpecName: "builder-dockercfg-c6q5x-pull") pod "3f65d72f-81b2-44b0-b474-fac6735fcb1c" (UID: "3f65d72f-81b2-44b0-b474-fac6735fcb1c"). InnerVolumeSpecName "builder-dockercfg-c6q5x-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.325474 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7aa1750c-350f-4e7a-aae9-2e877bce69cb" path="/var/lib/kubelet/pods/7aa1750c-350f-4e7a-aae9-2e877bce69cb/volumes" Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.397734 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jtvhb\" (UniqueName: \"kubernetes.io/projected/3f65d72f-81b2-44b0-b474-fac6735fcb1c-kube-api-access-jtvhb\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.397770 5112 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-c6q5x-pull\" (UniqueName: \"kubernetes.io/secret/3f65d72f-81b2-44b0-b474-fac6735fcb1c-builder-dockercfg-c6q5x-pull\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.397780 5112 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3f65d72f-81b2-44b0-b474-fac6735fcb1c-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.397790 5112 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-c6q5x-push\" (UniqueName: \"kubernetes.io/secret/3f65d72f-81b2-44b0-b474-fac6735fcb1c-builder-dockercfg-c6q5x-push\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.397800 
5112 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3f65d72f-81b2-44b0-b474-fac6735fcb1c-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.749539 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_3f65d72f-81b2-44b0-b474-fac6735fcb1c/git-clone/0.log" Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.749626 5112 generic.go:358] "Generic (PLEG): container finished" podID="3f65d72f-81b2-44b0-b474-fac6735fcb1c" containerID="906e84e8233cfc26d5501b806251ed4765da78008510ae7be1edb438ce9edcd8" exitCode=1 Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.749783 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"3f65d72f-81b2-44b0-b474-fac6735fcb1c","Type":"ContainerDied","Data":"906e84e8233cfc26d5501b806251ed4765da78008510ae7be1edb438ce9edcd8"} Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.749791 5112 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.749872 5112 scope.go:117] "RemoveContainer" containerID="906e84e8233cfc26d5501b806251ed4765da78008510ae7be1edb438ce9edcd8" Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.749853 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"3f65d72f-81b2-44b0-b474-fac6735fcb1c","Type":"ContainerDied","Data":"6e75cb578328d976f25c12847444ccc9b56b20a74f44c19e27abe6c54c1a4e0a"} Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.778153 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.779731 5112 scope.go:117] "RemoveContainer" containerID="906e84e8233cfc26d5501b806251ed4765da78008510ae7be1edb438ce9edcd8" Dec 08 17:55:15 crc kubenswrapper[5112]: E1208 17:55:15.780364 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"906e84e8233cfc26d5501b806251ed4765da78008510ae7be1edb438ce9edcd8\": container with ID starting with 906e84e8233cfc26d5501b806251ed4765da78008510ae7be1edb438ce9edcd8 not found: ID does not exist" containerID="906e84e8233cfc26d5501b806251ed4765da78008510ae7be1edb438ce9edcd8" Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.780418 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"906e84e8233cfc26d5501b806251ed4765da78008510ae7be1edb438ce9edcd8"} err="failed to get container status \"906e84e8233cfc26d5501b806251ed4765da78008510ae7be1edb438ce9edcd8\": rpc error: code = NotFound desc = could not find container \"906e84e8233cfc26d5501b806251ed4765da78008510ae7be1edb438ce9edcd8\": container with ID starting with 906e84e8233cfc26d5501b806251ed4765da78008510ae7be1edb438ce9edcd8 not found: ID does not 
exist" Dec 08 17:55:15 crc kubenswrapper[5112]: I1208 17:55:15.787066 5112 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 08 17:55:17 crc kubenswrapper[5112]: I1208 17:55:17.325741 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f65d72f-81b2-44b0-b474-fac6735fcb1c" path="/var/lib/kubelet/pods/3f65d72f-81b2-44b0-b474-fac6735fcb1c/volumes" Dec 08 17:55:19 crc kubenswrapper[5112]: I1208 17:55:19.781775 5112 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21" containerName="elasticsearch" probeResult="failure" output=< Dec 08 17:55:19 crc kubenswrapper[5112]: {"timestamp": "2025-12-08T17:55:19+00:00", "message": "readiness probe failed", "curl_rc": "7"} Dec 08 17:55:19 crc kubenswrapper[5112]: > Dec 08 17:55:23 crc kubenswrapper[5112]: I1208 17:55:23.627638 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kvv4v_288ee203-be3f-4176-90b2-7d95ee47aee8/kube-multus/0.log" Dec 08 17:55:23 crc kubenswrapper[5112]: I1208 17:55:23.629769 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kvv4v_288ee203-be3f-4176-90b2-7d95ee47aee8/kube-multus/0.log" Dec 08 17:55:23 crc kubenswrapper[5112]: I1208 17:55:23.637877 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 17:55:23 crc kubenswrapper[5112]: I1208 17:55:23.639491 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 17:55:24 crc kubenswrapper[5112]: I1208 17:55:24.794615 5112 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" 
podUID="299fbfb1-bb19-4e9c-a4b9-8d1dd1bbae21" containerName="elasticsearch" probeResult="failure" output=< Dec 08 17:55:24 crc kubenswrapper[5112]: {"timestamp": "2025-12-08T17:55:24+00:00", "message": "readiness probe failed", "curl_rc": "7"} Dec 08 17:55:24 crc kubenswrapper[5112]: > Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.312394 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.313975 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7aa1750c-350f-4e7a-aae9-2e877bce69cb" containerName="manage-dockerfile" Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.314011 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aa1750c-350f-4e7a-aae9-2e877bce69cb" containerName="manage-dockerfile" Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.314047 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3f65d72f-81b2-44b0-b474-fac6735fcb1c" containerName="git-clone" Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.314061 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f65d72f-81b2-44b0-b474-fac6735fcb1c" containerName="git-clone" Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.314318 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="7aa1750c-350f-4e7a-aae9-2e877bce69cb" containerName="manage-dockerfile" Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.314361 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="3f65d72f-81b2-44b0-b474-fac6735fcb1c" containerName="git-clone" Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.650513 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.651423 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.654412 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-3-sys-config\"" Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.654563 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-c6q5x\"" Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.654762 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-3-global-ca\"" Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.654923 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-3-ca\"" Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.755890 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjq6b\" (UniqueName: \"kubernetes.io/projected/430025fe-fef5-4c4f-9e02-db81563ffbd9-kube-api-access-xjq6b\") pod \"service-telemetry-operator-3-build\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.755982 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/430025fe-fef5-4c4f-9e02-db81563ffbd9-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.756026 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/430025fe-fef5-4c4f-9e02-db81563ffbd9-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.756121 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/430025fe-fef5-4c4f-9e02-db81563ffbd9-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.756152 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/430025fe-fef5-4c4f-9e02-db81563ffbd9-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.756193 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-c6q5x-pull\" (UniqueName: \"kubernetes.io/secret/430025fe-fef5-4c4f-9e02-db81563ffbd9-builder-dockercfg-c6q5x-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.756265 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/430025fe-fef5-4c4f-9e02-db81563ffbd9-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 
08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.756298 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-c6q5x-push\" (UniqueName: \"kubernetes.io/secret/430025fe-fef5-4c4f-9e02-db81563ffbd9-builder-dockercfg-c6q5x-push\") pod \"service-telemetry-operator-3-build\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.756329 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/430025fe-fef5-4c4f-9e02-db81563ffbd9-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.756401 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/430025fe-fef5-4c4f-9e02-db81563ffbd9-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.756434 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/430025fe-fef5-4c4f-9e02-db81563ffbd9-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.756496 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/430025fe-fef5-4c4f-9e02-db81563ffbd9-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.858964 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/430025fe-fef5-4c4f-9e02-db81563ffbd9-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.859053 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/430025fe-fef5-4c4f-9e02-db81563ffbd9-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.859190 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/430025fe-fef5-4c4f-9e02-db81563ffbd9-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.859220 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/430025fe-fef5-4c4f-9e02-db81563ffbd9-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.859303 5112 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"builder-dockercfg-c6q5x-pull\" (UniqueName: \"kubernetes.io/secret/430025fe-fef5-4c4f-9e02-db81563ffbd9-builder-dockercfg-c6q5x-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.859342 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/430025fe-fef5-4c4f-9e02-db81563ffbd9-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.859395 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/430025fe-fef5-4c4f-9e02-db81563ffbd9-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.859443 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-c6q5x-push\" (UniqueName: \"kubernetes.io/secret/430025fe-fef5-4c4f-9e02-db81563ffbd9-builder-dockercfg-c6q5x-push\") pod \"service-telemetry-operator-3-build\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.859478 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/430025fe-fef5-4c4f-9e02-db81563ffbd9-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.859498 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/430025fe-fef5-4c4f-9e02-db81563ffbd9-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.859569 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/430025fe-fef5-4c4f-9e02-db81563ffbd9-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.859602 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/430025fe-fef5-4c4f-9e02-db81563ffbd9-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.859671 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/430025fe-fef5-4c4f-9e02-db81563ffbd9-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.859793 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xjq6b\" (UniqueName: \"kubernetes.io/projected/430025fe-fef5-4c4f-9e02-db81563ffbd9-kube-api-access-xjq6b\") pod \"service-telemetry-operator-3-build\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.859904 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/430025fe-fef5-4c4f-9e02-db81563ffbd9-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.859568 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/430025fe-fef5-4c4f-9e02-db81563ffbd9-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.860129 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/430025fe-fef5-4c4f-9e02-db81563ffbd9-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.860223 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/430025fe-fef5-4c4f-9e02-db81563ffbd9-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.860407 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/430025fe-fef5-4c4f-9e02-db81563ffbd9-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.860632 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/430025fe-fef5-4c4f-9e02-db81563ffbd9-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.860652 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/430025fe-fef5-4c4f-9e02-db81563ffbd9-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.865147 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-c6q5x-pull\" (UniqueName: \"kubernetes.io/secret/430025fe-fef5-4c4f-9e02-db81563ffbd9-builder-dockercfg-c6q5x-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.865147 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-c6q5x-push\" (UniqueName: \"kubernetes.io/secret/430025fe-fef5-4c4f-9e02-db81563ffbd9-builder-dockercfg-c6q5x-push\") pod \"service-telemetry-operator-3-build\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.878855 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjq6b\" (UniqueName: \"kubernetes.io/projected/430025fe-fef5-4c4f-9e02-db81563ffbd9-kube-api-access-xjq6b\") pod \"service-telemetry-operator-3-build\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Dec 08 17:55:25 crc kubenswrapper[5112]: I1208 17:55:25.970222 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build"
Dec 08 17:55:26 crc kubenswrapper[5112]: I1208 17:55:26.195226 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"]
Dec 08 17:55:26 crc kubenswrapper[5112]: I1208 17:55:26.833117 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"430025fe-fef5-4c4f-9e02-db81563ffbd9","Type":"ContainerStarted","Data":"acc363879ddc591be590408ec3eac287685c0c932ec2dbf5d336c4c32f65e7bb"}
Dec 08 17:55:26 crc kubenswrapper[5112]: I1208 17:55:26.833540 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"430025fe-fef5-4c4f-9e02-db81563ffbd9","Type":"ContainerStarted","Data":"81e294a7c4f2d5a9044fa0267286094a5bacaac42de55f8b05457832b804b50c"}
Dec 08 17:55:26 crc kubenswrapper[5112]: I1208 17:55:26.901109 5112 ???:1] "http: TLS handshake error from 192.168.126.11:50074: no serving certificate available for the kubelet"
Dec 08 17:55:27 crc kubenswrapper[5112]: I1208 17:55:27.950255 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"]
Dec 08 17:55:28 crc kubenswrapper[5112]: I1208 17:55:28.845523 5112 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-3-build" podUID="430025fe-fef5-4c4f-9e02-db81563ffbd9" containerName="git-clone" containerID="cri-o://acc363879ddc591be590408ec3eac287685c0c932ec2dbf5d336c4c32f65e7bb" gracePeriod=30
Dec 08 17:55:29 crc kubenswrapper[5112]: I1208 17:55:29.782627 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_430025fe-fef5-4c4f-9e02-db81563ffbd9/git-clone/0.log"
Dec 08 17:55:29 crc kubenswrapper[5112]: I1208 17:55:29.783006 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build"
Dec 08 17:55:29 crc kubenswrapper[5112]: I1208 17:55:29.870439 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_430025fe-fef5-4c4f-9e02-db81563ffbd9/git-clone/0.log"
Dec 08 17:55:29 crc kubenswrapper[5112]: I1208 17:55:29.870488 5112 generic.go:358] "Generic (PLEG): container finished" podID="430025fe-fef5-4c4f-9e02-db81563ffbd9" containerID="acc363879ddc591be590408ec3eac287685c0c932ec2dbf5d336c4c32f65e7bb" exitCode=1
Dec 08 17:55:29 crc kubenswrapper[5112]: I1208 17:55:29.870737 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"430025fe-fef5-4c4f-9e02-db81563ffbd9","Type":"ContainerDied","Data":"acc363879ddc591be590408ec3eac287685c0c932ec2dbf5d336c4c32f65e7bb"}
Dec 08 17:55:29 crc kubenswrapper[5112]: I1208 17:55:29.870779 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"430025fe-fef5-4c4f-9e02-db81563ffbd9","Type":"ContainerDied","Data":"81e294a7c4f2d5a9044fa0267286094a5bacaac42de55f8b05457832b804b50c"}
Dec 08 17:55:29 crc kubenswrapper[5112]: I1208 17:55:29.870800 5112 scope.go:117] "RemoveContainer" containerID="acc363879ddc591be590408ec3eac287685c0c932ec2dbf5d336c4c32f65e7bb"
Dec 08 17:55:29 crc kubenswrapper[5112]: I1208 17:55:29.870959 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build"
Dec 08 17:55:29 crc kubenswrapper[5112]: I1208 17:55:29.898996 5112 scope.go:117] "RemoveContainer" containerID="acc363879ddc591be590408ec3eac287685c0c932ec2dbf5d336c4c32f65e7bb"
Dec 08 17:55:29 crc kubenswrapper[5112]: E1208 17:55:29.899640 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acc363879ddc591be590408ec3eac287685c0c932ec2dbf5d336c4c32f65e7bb\": container with ID starting with acc363879ddc591be590408ec3eac287685c0c932ec2dbf5d336c4c32f65e7bb not found: ID does not exist" containerID="acc363879ddc591be590408ec3eac287685c0c932ec2dbf5d336c4c32f65e7bb"
Dec 08 17:55:29 crc kubenswrapper[5112]: I1208 17:55:29.899720 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acc363879ddc591be590408ec3eac287685c0c932ec2dbf5d336c4c32f65e7bb"} err="failed to get container status \"acc363879ddc591be590408ec3eac287685c0c932ec2dbf5d336c4c32f65e7bb\": rpc error: code = NotFound desc = could not find container \"acc363879ddc591be590408ec3eac287685c0c932ec2dbf5d336c4c32f65e7bb\": container with ID starting with acc363879ddc591be590408ec3eac287685c0c932ec2dbf5d336c4c32f65e7bb not found: ID does not exist"
Dec 08 17:55:29 crc kubenswrapper[5112]: I1208 17:55:29.922484 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/430025fe-fef5-4c4f-9e02-db81563ffbd9-node-pullsecrets\") pod \"430025fe-fef5-4c4f-9e02-db81563ffbd9\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") "
Dec 08 17:55:29 crc kubenswrapper[5112]: I1208 17:55:29.922558 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/430025fe-fef5-4c4f-9e02-db81563ffbd9-build-blob-cache\") pod \"430025fe-fef5-4c4f-9e02-db81563ffbd9\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") "
Dec 08 17:55:29 crc kubenswrapper[5112]: I1208 17:55:29.922596 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/430025fe-fef5-4c4f-9e02-db81563ffbd9-build-system-configs\") pod \"430025fe-fef5-4c4f-9e02-db81563ffbd9\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") "
Dec 08 17:55:29 crc kubenswrapper[5112]: I1208 17:55:29.922615 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/430025fe-fef5-4c4f-9e02-db81563ffbd9-container-storage-root\") pod \"430025fe-fef5-4c4f-9e02-db81563ffbd9\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") "
Dec 08 17:55:29 crc kubenswrapper[5112]: I1208 17:55:29.922652 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/430025fe-fef5-4c4f-9e02-db81563ffbd9-buildworkdir\") pod \"430025fe-fef5-4c4f-9e02-db81563ffbd9\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") "
Dec 08 17:55:29 crc kubenswrapper[5112]: I1208 17:55:29.922696 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjq6b\" (UniqueName: \"kubernetes.io/projected/430025fe-fef5-4c4f-9e02-db81563ffbd9-kube-api-access-xjq6b\") pod \"430025fe-fef5-4c4f-9e02-db81563ffbd9\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") "
Dec 08 17:55:29 crc kubenswrapper[5112]: I1208 17:55:29.922722 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/430025fe-fef5-4c4f-9e02-db81563ffbd9-container-storage-run\") pod \"430025fe-fef5-4c4f-9e02-db81563ffbd9\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") "
Dec 08 17:55:29 crc kubenswrapper[5112]: I1208 17:55:29.922768 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/430025fe-fef5-4c4f-9e02-db81563ffbd9-buildcachedir\") pod \"430025fe-fef5-4c4f-9e02-db81563ffbd9\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") "
Dec 08 17:55:29 crc kubenswrapper[5112]: I1208 17:55:29.922788 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/430025fe-fef5-4c4f-9e02-db81563ffbd9-build-ca-bundles\") pod \"430025fe-fef5-4c4f-9e02-db81563ffbd9\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") "
Dec 08 17:55:29 crc kubenswrapper[5112]: I1208 17:55:29.922826 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/430025fe-fef5-4c4f-9e02-db81563ffbd9-build-proxy-ca-bundles\") pod \"430025fe-fef5-4c4f-9e02-db81563ffbd9\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") "
Dec 08 17:55:29 crc kubenswrapper[5112]: I1208 17:55:29.922860 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-c6q5x-push\" (UniqueName: \"kubernetes.io/secret/430025fe-fef5-4c4f-9e02-db81563ffbd9-builder-dockercfg-c6q5x-push\") pod \"430025fe-fef5-4c4f-9e02-db81563ffbd9\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") "
Dec 08 17:55:29 crc kubenswrapper[5112]: I1208 17:55:29.922878 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-c6q5x-pull\" (UniqueName: \"kubernetes.io/secret/430025fe-fef5-4c4f-9e02-db81563ffbd9-builder-dockercfg-c6q5x-pull\") pod \"430025fe-fef5-4c4f-9e02-db81563ffbd9\" (UID: \"430025fe-fef5-4c4f-9e02-db81563ffbd9\") "
Dec 08 17:55:29 crc kubenswrapper[5112]: I1208 17:55:29.922706 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/430025fe-fef5-4c4f-9e02-db81563ffbd9-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "430025fe-fef5-4c4f-9e02-db81563ffbd9" (UID: "430025fe-fef5-4c4f-9e02-db81563ffbd9"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 17:55:29 crc kubenswrapper[5112]: I1208 17:55:29.923278 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/430025fe-fef5-4c4f-9e02-db81563ffbd9-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "430025fe-fef5-4c4f-9e02-db81563ffbd9" (UID: "430025fe-fef5-4c4f-9e02-db81563ffbd9"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 17:55:29 crc kubenswrapper[5112]: I1208 17:55:29.923046 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/430025fe-fef5-4c4f-9e02-db81563ffbd9-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "430025fe-fef5-4c4f-9e02-db81563ffbd9" (UID: "430025fe-fef5-4c4f-9e02-db81563ffbd9"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:55:29 crc kubenswrapper[5112]: I1208 17:55:29.923141 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/430025fe-fef5-4c4f-9e02-db81563ffbd9-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "430025fe-fef5-4c4f-9e02-db81563ffbd9" (UID: "430025fe-fef5-4c4f-9e02-db81563ffbd9"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:55:29 crc kubenswrapper[5112]: I1208 17:55:29.923576 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/430025fe-fef5-4c4f-9e02-db81563ffbd9-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "430025fe-fef5-4c4f-9e02-db81563ffbd9" (UID: "430025fe-fef5-4c4f-9e02-db81563ffbd9"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:55:29 crc kubenswrapper[5112]: I1208 17:55:29.923585 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/430025fe-fef5-4c4f-9e02-db81563ffbd9-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "430025fe-fef5-4c4f-9e02-db81563ffbd9" (UID: "430025fe-fef5-4c4f-9e02-db81563ffbd9"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:55:29 crc kubenswrapper[5112]: I1208 17:55:29.923915 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/430025fe-fef5-4c4f-9e02-db81563ffbd9-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "430025fe-fef5-4c4f-9e02-db81563ffbd9" (UID: "430025fe-fef5-4c4f-9e02-db81563ffbd9"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:55:29 crc kubenswrapper[5112]: I1208 17:55:29.923743 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/430025fe-fef5-4c4f-9e02-db81563ffbd9-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "430025fe-fef5-4c4f-9e02-db81563ffbd9" (UID: "430025fe-fef5-4c4f-9e02-db81563ffbd9"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:55:29 crc kubenswrapper[5112]: I1208 17:55:29.923883 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/430025fe-fef5-4c4f-9e02-db81563ffbd9-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "430025fe-fef5-4c4f-9e02-db81563ffbd9" (UID: "430025fe-fef5-4c4f-9e02-db81563ffbd9"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:55:29 crc kubenswrapper[5112]: I1208 17:55:29.927914 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/430025fe-fef5-4c4f-9e02-db81563ffbd9-kube-api-access-xjq6b" (OuterVolumeSpecName: "kube-api-access-xjq6b") pod "430025fe-fef5-4c4f-9e02-db81563ffbd9" (UID: "430025fe-fef5-4c4f-9e02-db81563ffbd9"). InnerVolumeSpecName "kube-api-access-xjq6b". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:55:29 crc kubenswrapper[5112]: I1208 17:55:29.928069 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/430025fe-fef5-4c4f-9e02-db81563ffbd9-builder-dockercfg-c6q5x-pull" (OuterVolumeSpecName: "builder-dockercfg-c6q5x-pull") pod "430025fe-fef5-4c4f-9e02-db81563ffbd9" (UID: "430025fe-fef5-4c4f-9e02-db81563ffbd9"). InnerVolumeSpecName "builder-dockercfg-c6q5x-pull". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:55:29 crc kubenswrapper[5112]: I1208 17:55:29.930152 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/430025fe-fef5-4c4f-9e02-db81563ffbd9-builder-dockercfg-c6q5x-push" (OuterVolumeSpecName: "builder-dockercfg-c6q5x-push") pod "430025fe-fef5-4c4f-9e02-db81563ffbd9" (UID: "430025fe-fef5-4c4f-9e02-db81563ffbd9"). InnerVolumeSpecName "builder-dockercfg-c6q5x-push". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:55:30 crc kubenswrapper[5112]: I1208 17:55:30.024685 5112 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/430025fe-fef5-4c4f-9e02-db81563ffbd9-buildcachedir\") on node \"crc\" DevicePath \"\""
Dec 08 17:55:30 crc kubenswrapper[5112]: I1208 17:55:30.024730 5112 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/430025fe-fef5-4c4f-9e02-db81563ffbd9-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 08 17:55:30 crc kubenswrapper[5112]: I1208 17:55:30.024745 5112 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/430025fe-fef5-4c4f-9e02-db81563ffbd9-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 08 17:55:30 crc kubenswrapper[5112]: I1208 17:55:30.024759 5112 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-c6q5x-push\" (UniqueName: \"kubernetes.io/secret/430025fe-fef5-4c4f-9e02-db81563ffbd9-builder-dockercfg-c6q5x-push\") on node \"crc\" DevicePath \"\""
Dec 08 17:55:30 crc kubenswrapper[5112]: I1208 17:55:30.024770 5112 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-c6q5x-pull\" (UniqueName: \"kubernetes.io/secret/430025fe-fef5-4c4f-9e02-db81563ffbd9-builder-dockercfg-c6q5x-pull\") on node \"crc\" DevicePath \"\""
Dec 08 17:55:30 crc kubenswrapper[5112]: I1208 17:55:30.024781 5112 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/430025fe-fef5-4c4f-9e02-db81563ffbd9-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Dec 08 17:55:30 crc kubenswrapper[5112]: I1208 17:55:30.024792 5112 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/430025fe-fef5-4c4f-9e02-db81563ffbd9-build-blob-cache\") on node \"crc\" DevicePath \"\""
Dec 08 17:55:30 crc kubenswrapper[5112]: I1208 17:55:30.024802 5112 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/430025fe-fef5-4c4f-9e02-db81563ffbd9-build-system-configs\") on node \"crc\" DevicePath \"\""
Dec 08 17:55:30 crc kubenswrapper[5112]: I1208 17:55:30.024815 5112 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/430025fe-fef5-4c4f-9e02-db81563ffbd9-container-storage-root\") on node \"crc\" DevicePath \"\""
Dec 08 17:55:30 crc kubenswrapper[5112]: I1208 17:55:30.024828 5112 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/430025fe-fef5-4c4f-9e02-db81563ffbd9-buildworkdir\") on node \"crc\" DevicePath \"\""
Dec 08 17:55:30 crc kubenswrapper[5112]: I1208 17:55:30.024840 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xjq6b\" (UniqueName: \"kubernetes.io/projected/430025fe-fef5-4c4f-9e02-db81563ffbd9-kube-api-access-xjq6b\") on node \"crc\" DevicePath \"\""
Dec 08 17:55:30 crc kubenswrapper[5112]: I1208 17:55:30.024850 5112 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/430025fe-fef5-4c4f-9e02-db81563ffbd9-container-storage-run\") on node \"crc\" DevicePath \"\""
Dec 08 17:55:30 crc kubenswrapper[5112]: I1208 17:55:30.212568 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"]
Dec 08 17:55:30 crc kubenswrapper[5112]: I1208 17:55:30.225741 5112 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"]
Dec 08 17:55:30 crc kubenswrapper[5112]: I1208 17:55:30.395046 5112 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 17:55:31 crc kubenswrapper[5112]: I1208 17:55:31.325965 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="430025fe-fef5-4c4f-9e02-db81563ffbd9" path="/var/lib/kubelet/pods/430025fe-fef5-4c4f-9e02-db81563ffbd9/volumes"
Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.418756 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"]
Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.420823 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="430025fe-fef5-4c4f-9e02-db81563ffbd9" containerName="git-clone"
Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.420916 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="430025fe-fef5-4c4f-9e02-db81563ffbd9" containerName="git-clone"
Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.421102 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="430025fe-fef5-4c4f-9e02-db81563ffbd9" containerName="git-clone"
Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.677788 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"]
Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.677927 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.680845 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-4-ca\""
Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.681164 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-c6q5x\""
Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.682874 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-4-global-ca\""
Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.683160 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-4-sys-config\""
Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.756275 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.756654 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpmcw\" (UniqueName: \"kubernetes.io/projected/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-kube-api-access-mpmcw\") pod \"service-telemetry-operator-4-build\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.756702 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-c6q5x-pull\" (UniqueName: \"kubernetes.io/secret/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-builder-dockercfg-c6q5x-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.756733 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.756753 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.756781 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.756803 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.756827 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.756876 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.756920 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.756975 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.757006 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-c6q5x-push\" (UniqueName: \"kubernetes.io/secret/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-builder-dockercfg-c6q5x-push\") pod \"service-telemetry-operator-4-build\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.858768 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.859059 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.859230 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.859347 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-c6q5x-push\" (UniqueName: \"kubernetes.io/secret/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-builder-dockercfg-c6q5x-push\") pod \"service-telemetry-operator-4-build\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.859457 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.859557 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mpmcw\" (UniqueName: \"kubernetes.io/projected/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-kube-api-access-mpmcw\") pod \"service-telemetry-operator-4-build\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.859672 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-c6q5x-pull\" (UniqueName: \"kubernetes.io/secret/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-builder-dockercfg-c6q5x-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.859784 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.859917 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.860016 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.860114 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.860210 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.860305 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.860344 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-node-pullsecrets\") pod
\"service-telemetry-operator-4-build\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.860822 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.860955 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.860214 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.859487 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.861401 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.861466 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.862910 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.866238 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-c6q5x-push\" (UniqueName: \"kubernetes.io/secret/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-builder-dockercfg-c6q5x-push\") pod \"service-telemetry-operator-4-build\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.873656 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-c6q5x-pull\" (UniqueName: \"kubernetes.io/secret/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-builder-dockercfg-c6q5x-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.878437 5112 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpmcw\" (UniqueName: \"kubernetes.io/projected/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-kube-api-access-mpmcw\") pod \"service-telemetry-operator-4-build\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 17:55:39 crc kubenswrapper[5112]: I1208 17:55:39.998245 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 17:55:40 crc kubenswrapper[5112]: I1208 17:55:40.564227 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Dec 08 17:55:40 crc kubenswrapper[5112]: I1208 17:55:40.963538 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799","Type":"ContainerStarted","Data":"8ccedf5d0cb22ca2d14558437e89e65dd6dd3243b76b6c49cd358fcbc482470d"} Dec 08 17:55:40 crc kubenswrapper[5112]: I1208 17:55:40.964127 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799","Type":"ContainerStarted","Data":"595af4d0b16b01aae040abb095b34cf07cb8c9bcf17abf9546357af859cf6e69"} Dec 08 17:55:41 crc kubenswrapper[5112]: I1208 17:55:41.021563 5112 ???:1] "http: TLS handshake error from 192.168.126.11:51552: no serving certificate available for the kubelet" Dec 08 17:55:42 crc kubenswrapper[5112]: I1208 17:55:42.047876 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Dec 08 17:55:42 crc kubenswrapper[5112]: I1208 17:55:42.978096 5112 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-4-build" podUID="5e46fbd1-9a5d-45be-94a0-c5e0b0b35799" 
containerName="git-clone" containerID="cri-o://8ccedf5d0cb22ca2d14558437e89e65dd6dd3243b76b6c49cd358fcbc482470d" gracePeriod=30 Dec 08 17:55:43 crc kubenswrapper[5112]: I1208 17:55:43.440370 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_5e46fbd1-9a5d-45be-94a0-c5e0b0b35799/git-clone/0.log" Dec 08 17:55:43 crc kubenswrapper[5112]: I1208 17:55:43.440844 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 17:55:43 crc kubenswrapper[5112]: I1208 17:55:43.525930 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-buildworkdir\") pod \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " Dec 08 17:55:43 crc kubenswrapper[5112]: I1208 17:55:43.525995 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-node-pullsecrets\") pod \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " Dec 08 17:55:43 crc kubenswrapper[5112]: I1208 17:55:43.526140 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-c6q5x-push\" (UniqueName: \"kubernetes.io/secret/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-builder-dockercfg-c6q5x-push\") pod \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " Dec 08 17:55:43 crc kubenswrapper[5112]: I1208 17:55:43.526161 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "5e46fbd1-9a5d-45be-94a0-c5e0b0b35799" (UID: 
"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:55:43 crc kubenswrapper[5112]: I1208 17:55:43.526177 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-c6q5x-pull\" (UniqueName: \"kubernetes.io/secret/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-builder-dockercfg-c6q5x-pull\") pod \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " Dec 08 17:55:43 crc kubenswrapper[5112]: I1208 17:55:43.526252 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-build-proxy-ca-bundles\") pod \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " Dec 08 17:55:43 crc kubenswrapper[5112]: I1208 17:55:43.526371 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-build-blob-cache\") pod \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " Dec 08 17:55:43 crc kubenswrapper[5112]: I1208 17:55:43.526434 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-buildcachedir\") pod \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " Dec 08 17:55:43 crc kubenswrapper[5112]: I1208 17:55:43.526463 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-build-ca-bundles\") pod \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " Dec 08 17:55:43 crc kubenswrapper[5112]: 
I1208 17:55:43.526489 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-build-system-configs\") pod \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " Dec 08 17:55:43 crc kubenswrapper[5112]: I1208 17:55:43.526544 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-container-storage-root\") pod \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " Dec 08 17:55:43 crc kubenswrapper[5112]: I1208 17:55:43.526568 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpmcw\" (UniqueName: \"kubernetes.io/projected/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-kube-api-access-mpmcw\") pod \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " Dec 08 17:55:43 crc kubenswrapper[5112]: I1208 17:55:43.526578 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "5e46fbd1-9a5d-45be-94a0-c5e0b0b35799" (UID: "5e46fbd1-9a5d-45be-94a0-c5e0b0b35799"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:55:43 crc kubenswrapper[5112]: I1208 17:55:43.526695 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-container-storage-run\") pod \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\" (UID: \"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799\") " Dec 08 17:55:43 crc kubenswrapper[5112]: I1208 17:55:43.526912 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "5e46fbd1-9a5d-45be-94a0-c5e0b0b35799" (UID: "5e46fbd1-9a5d-45be-94a0-c5e0b0b35799"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:55:43 crc kubenswrapper[5112]: I1208 17:55:43.527058 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "5e46fbd1-9a5d-45be-94a0-c5e0b0b35799" (UID: "5e46fbd1-9a5d-45be-94a0-c5e0b0b35799"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:55:43 crc kubenswrapper[5112]: I1208 17:55:43.527206 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "5e46fbd1-9a5d-45be-94a0-c5e0b0b35799" (UID: "5e46fbd1-9a5d-45be-94a0-c5e0b0b35799"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:55:43 crc kubenswrapper[5112]: I1208 17:55:43.527369 5112 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:43 crc kubenswrapper[5112]: I1208 17:55:43.527395 5112 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:43 crc kubenswrapper[5112]: I1208 17:55:43.527408 5112 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:43 crc kubenswrapper[5112]: I1208 17:55:43.527420 5112 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:43 crc kubenswrapper[5112]: I1208 17:55:43.527431 5112 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:43 crc kubenswrapper[5112]: I1208 17:55:43.527428 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "5e46fbd1-9a5d-45be-94a0-c5e0b0b35799" (UID: "5e46fbd1-9a5d-45be-94a0-c5e0b0b35799"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:55:43 crc kubenswrapper[5112]: I1208 17:55:43.527822 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "5e46fbd1-9a5d-45be-94a0-c5e0b0b35799" (UID: "5e46fbd1-9a5d-45be-94a0-c5e0b0b35799"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:55:43 crc kubenswrapper[5112]: I1208 17:55:43.527872 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "5e46fbd1-9a5d-45be-94a0-c5e0b0b35799" (UID: "5e46fbd1-9a5d-45be-94a0-c5e0b0b35799"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:55:43 crc kubenswrapper[5112]: I1208 17:55:43.527923 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "5e46fbd1-9a5d-45be-94a0-c5e0b0b35799" (UID: "5e46fbd1-9a5d-45be-94a0-c5e0b0b35799"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:55:43 crc kubenswrapper[5112]: I1208 17:55:43.532333 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-kube-api-access-mpmcw" (OuterVolumeSpecName: "kube-api-access-mpmcw") pod "5e46fbd1-9a5d-45be-94a0-c5e0b0b35799" (UID: "5e46fbd1-9a5d-45be-94a0-c5e0b0b35799"). InnerVolumeSpecName "kube-api-access-mpmcw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:55:43 crc kubenswrapper[5112]: I1208 17:55:43.532414 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-builder-dockercfg-c6q5x-pull" (OuterVolumeSpecName: "builder-dockercfg-c6q5x-pull") pod "5e46fbd1-9a5d-45be-94a0-c5e0b0b35799" (UID: "5e46fbd1-9a5d-45be-94a0-c5e0b0b35799"). InnerVolumeSpecName "builder-dockercfg-c6q5x-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:55:43 crc kubenswrapper[5112]: I1208 17:55:43.533263 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-builder-dockercfg-c6q5x-push" (OuterVolumeSpecName: "builder-dockercfg-c6q5x-push") pod "5e46fbd1-9a5d-45be-94a0-c5e0b0b35799" (UID: "5e46fbd1-9a5d-45be-94a0-c5e0b0b35799"). InnerVolumeSpecName "builder-dockercfg-c6q5x-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:55:43 crc kubenswrapper[5112]: I1208 17:55:43.628468 5112 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:43 crc kubenswrapper[5112]: I1208 17:55:43.628509 5112 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:43 crc kubenswrapper[5112]: I1208 17:55:43.628520 5112 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-c6q5x-push\" (UniqueName: \"kubernetes.io/secret/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-builder-dockercfg-c6q5x-push\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:43 crc kubenswrapper[5112]: I1208 17:55:43.628529 5112 reconciler_common.go:299] "Volume detached for volume 
\"builder-dockercfg-c6q5x-pull\" (UniqueName: \"kubernetes.io/secret/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-builder-dockercfg-c6q5x-pull\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:43 crc kubenswrapper[5112]: I1208 17:55:43.628541 5112 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:43 crc kubenswrapper[5112]: I1208 17:55:43.628551 5112 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:43 crc kubenswrapper[5112]: I1208 17:55:43.628560 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mpmcw\" (UniqueName: \"kubernetes.io/projected/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799-kube-api-access-mpmcw\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:43 crc kubenswrapper[5112]: I1208 17:55:43.985995 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_5e46fbd1-9a5d-45be-94a0-c5e0b0b35799/git-clone/0.log" Dec 08 17:55:43 crc kubenswrapper[5112]: I1208 17:55:43.986042 5112 generic.go:358] "Generic (PLEG): container finished" podID="5e46fbd1-9a5d-45be-94a0-c5e0b0b35799" containerID="8ccedf5d0cb22ca2d14558437e89e65dd6dd3243b76b6c49cd358fcbc482470d" exitCode=1 Dec 08 17:55:43 crc kubenswrapper[5112]: I1208 17:55:43.986189 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799","Type":"ContainerDied","Data":"8ccedf5d0cb22ca2d14558437e89e65dd6dd3243b76b6c49cd358fcbc482470d"} Dec 08 17:55:43 crc kubenswrapper[5112]: I1208 17:55:43.986222 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"5e46fbd1-9a5d-45be-94a0-c5e0b0b35799","Type":"ContainerDied","Data":"595af4d0b16b01aae040abb095b34cf07cb8c9bcf17abf9546357af859cf6e69"} Dec 08 17:55:43 crc kubenswrapper[5112]: I1208 17:55:43.986248 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 17:55:43 crc kubenswrapper[5112]: I1208 17:55:43.986238 5112 scope.go:117] "RemoveContainer" containerID="8ccedf5d0cb22ca2d14558437e89e65dd6dd3243b76b6c49cd358fcbc482470d" Dec 08 17:55:44 crc kubenswrapper[5112]: I1208 17:55:44.009981 5112 scope.go:117] "RemoveContainer" containerID="8ccedf5d0cb22ca2d14558437e89e65dd6dd3243b76b6c49cd358fcbc482470d" Dec 08 17:55:44 crc kubenswrapper[5112]: E1208 17:55:44.010436 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ccedf5d0cb22ca2d14558437e89e65dd6dd3243b76b6c49cd358fcbc482470d\": container with ID starting with 8ccedf5d0cb22ca2d14558437e89e65dd6dd3243b76b6c49cd358fcbc482470d not found: ID does not exist" containerID="8ccedf5d0cb22ca2d14558437e89e65dd6dd3243b76b6c49cd358fcbc482470d" Dec 08 17:55:44 crc kubenswrapper[5112]: I1208 17:55:44.010471 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ccedf5d0cb22ca2d14558437e89e65dd6dd3243b76b6c49cd358fcbc482470d"} err="failed to get container status \"8ccedf5d0cb22ca2d14558437e89e65dd6dd3243b76b6c49cd358fcbc482470d\": rpc error: code = NotFound desc = could not find container \"8ccedf5d0cb22ca2d14558437e89e65dd6dd3243b76b6c49cd358fcbc482470d\": container with ID starting with 8ccedf5d0cb22ca2d14558437e89e65dd6dd3243b76b6c49cd358fcbc482470d not found: ID does not exist" Dec 08 17:55:44 crc kubenswrapper[5112]: I1208 17:55:44.038006 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["service-telemetry/service-telemetry-operator-4-build"] Dec 08 17:55:44 crc kubenswrapper[5112]: I1208 17:55:44.044938 5112 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Dec 08 17:55:45 crc kubenswrapper[5112]: I1208 17:55:45.324258 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e46fbd1-9a5d-45be-94a0-c5e0b0b35799" path="/var/lib/kubelet/pods/5e46fbd1-9a5d-45be-94a0-c5e0b0b35799/volumes" Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.512221 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.513364 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5e46fbd1-9a5d-45be-94a0-c5e0b0b35799" containerName="git-clone" Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.513377 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e46fbd1-9a5d-45be-94a0-c5e0b0b35799" containerName="git-clone" Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.513504 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="5e46fbd1-9a5d-45be-94a0-c5e0b0b35799" containerName="git-clone" Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.540321 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.540501 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.544746 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-5-sys-config\"" Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.544934 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-5-ca\"" Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.545155 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-c6q5x\"" Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.545512 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-5-global-ca\"" Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.564244 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e3156539-90c0-49a7-90e9-7d6e5b0a471a-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.564289 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e3156539-90c0-49a7-90e9-7d6e5b0a471a-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.564316 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/e3156539-90c0-49a7-90e9-7d6e5b0a471a-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.564343 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-c6q5x-push\" (UniqueName: \"kubernetes.io/secret/e3156539-90c0-49a7-90e9-7d6e5b0a471a-builder-dockercfg-c6q5x-push\") pod \"service-telemetry-operator-5-build\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.564366 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e3156539-90c0-49a7-90e9-7d6e5b0a471a-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.564385 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e3156539-90c0-49a7-90e9-7d6e5b0a471a-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.564406 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e3156539-90c0-49a7-90e9-7d6e5b0a471a-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " pod="service-telemetry/service-telemetry-operator-5-build" 
Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.564431 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e3156539-90c0-49a7-90e9-7d6e5b0a471a-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.564482 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-c6q5x-pull\" (UniqueName: \"kubernetes.io/secret/e3156539-90c0-49a7-90e9-7d6e5b0a471a-builder-dockercfg-c6q5x-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.564503 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e3156539-90c0-49a7-90e9-7d6e5b0a471a-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.564526 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e3156539-90c0-49a7-90e9-7d6e5b0a471a-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.564609 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g42l7\" (UniqueName: 
\"kubernetes.io/projected/e3156539-90c0-49a7-90e9-7d6e5b0a471a-kube-api-access-g42l7\") pod \"service-telemetry-operator-5-build\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.665774 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g42l7\" (UniqueName: \"kubernetes.io/projected/e3156539-90c0-49a7-90e9-7d6e5b0a471a-kube-api-access-g42l7\") pod \"service-telemetry-operator-5-build\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.665873 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e3156539-90c0-49a7-90e9-7d6e5b0a471a-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.665914 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e3156539-90c0-49a7-90e9-7d6e5b0a471a-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.665953 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e3156539-90c0-49a7-90e9-7d6e5b0a471a-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.666197 5112 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e3156539-90c0-49a7-90e9-7d6e5b0a471a-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.666246 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e3156539-90c0-49a7-90e9-7d6e5b0a471a-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.666349 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-c6q5x-push\" (UniqueName: \"kubernetes.io/secret/e3156539-90c0-49a7-90e9-7d6e5b0a471a-builder-dockercfg-c6q5x-push\") pod \"service-telemetry-operator-5-build\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.666473 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e3156539-90c0-49a7-90e9-7d6e5b0a471a-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.666819 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e3156539-90c0-49a7-90e9-7d6e5b0a471a-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " 
pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.667292 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e3156539-90c0-49a7-90e9-7d6e5b0a471a-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.667749 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e3156539-90c0-49a7-90e9-7d6e5b0a471a-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.667818 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e3156539-90c0-49a7-90e9-7d6e5b0a471a-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.667890 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e3156539-90c0-49a7-90e9-7d6e5b0a471a-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.668461 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e3156539-90c0-49a7-90e9-7d6e5b0a471a-build-blob-cache\") pod 
\"service-telemetry-operator-5-build\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.668567 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e3156539-90c0-49a7-90e9-7d6e5b0a471a-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.668862 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-c6q5x-pull\" (UniqueName: \"kubernetes.io/secret/e3156539-90c0-49a7-90e9-7d6e5b0a471a-builder-dockercfg-c6q5x-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.670374 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e3156539-90c0-49a7-90e9-7d6e5b0a471a-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.669227 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e3156539-90c0-49a7-90e9-7d6e5b0a471a-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.670707 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/e3156539-90c0-49a7-90e9-7d6e5b0a471a-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.671459 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e3156539-90c0-49a7-90e9-7d6e5b0a471a-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.673468 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-c6q5x-push\" (UniqueName: \"kubernetes.io/secret/e3156539-90c0-49a7-90e9-7d6e5b0a471a-builder-dockercfg-c6q5x-push\") pod \"service-telemetry-operator-5-build\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.674697 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e3156539-90c0-49a7-90e9-7d6e5b0a471a-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.681759 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-c6q5x-pull\" (UniqueName: \"kubernetes.io/secret/e3156539-90c0-49a7-90e9-7d6e5b0a471a-builder-dockercfg-c6q5x-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.698843 5112 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g42l7\" (UniqueName: \"kubernetes.io/projected/e3156539-90c0-49a7-90e9-7d6e5b0a471a-kube-api-access-g42l7\") pod \"service-telemetry-operator-5-build\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:55:53 crc kubenswrapper[5112]: I1208 17:55:53.861548 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:55:54 crc kubenswrapper[5112]: I1208 17:55:54.074222 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 08 17:55:55 crc kubenswrapper[5112]: I1208 17:55:55.065043 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"e3156539-90c0-49a7-90e9-7d6e5b0a471a","Type":"ContainerStarted","Data":"d658d8dd6c5bbae61504d349a00f1b699d0abaccfc4d77498f6440eae090bae4"} Dec 08 17:55:55 crc kubenswrapper[5112]: I1208 17:55:55.065131 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"e3156539-90c0-49a7-90e9-7d6e5b0a471a","Type":"ContainerStarted","Data":"e50ff4915d69594489f48030a5045f1aedb4bba5df6fdffb482a53f767d873a5"} Dec 08 17:55:55 crc kubenswrapper[5112]: I1208 17:55:55.138827 5112 ???:1] "http: TLS handshake error from 192.168.126.11:53930: no serving certificate available for the kubelet" Dec 08 17:55:56 crc kubenswrapper[5112]: I1208 17:55:56.165570 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 08 17:55:57 crc kubenswrapper[5112]: I1208 17:55:57.092055 5112 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-5-build" podUID="e3156539-90c0-49a7-90e9-7d6e5b0a471a" 
containerName="git-clone" containerID="cri-o://d658d8dd6c5bbae61504d349a00f1b699d0abaccfc4d77498f6440eae090bae4" gracePeriod=30 Dec 08 17:55:57 crc kubenswrapper[5112]: I1208 17:55:57.468123 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_e3156539-90c0-49a7-90e9-7d6e5b0a471a/git-clone/0.log" Dec 08 17:55:57 crc kubenswrapper[5112]: I1208 17:55:57.468329 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:55:57 crc kubenswrapper[5112]: I1208 17:55:57.524472 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e3156539-90c0-49a7-90e9-7d6e5b0a471a-container-storage-root\") pod \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " Dec 08 17:55:57 crc kubenswrapper[5112]: I1208 17:55:57.524573 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e3156539-90c0-49a7-90e9-7d6e5b0a471a-node-pullsecrets\") pod \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " Dec 08 17:55:57 crc kubenswrapper[5112]: I1208 17:55:57.524640 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e3156539-90c0-49a7-90e9-7d6e5b0a471a-container-storage-run\") pod \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " Dec 08 17:55:57 crc kubenswrapper[5112]: I1208 17:55:57.524695 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e3156539-90c0-49a7-90e9-7d6e5b0a471a-buildworkdir\") pod \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\" (UID: 
\"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " Dec 08 17:55:57 crc kubenswrapper[5112]: I1208 17:55:57.524690 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3156539-90c0-49a7-90e9-7d6e5b0a471a-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "e3156539-90c0-49a7-90e9-7d6e5b0a471a" (UID: "e3156539-90c0-49a7-90e9-7d6e5b0a471a"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:55:57 crc kubenswrapper[5112]: I1208 17:55:57.524707 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3156539-90c0-49a7-90e9-7d6e5b0a471a-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "e3156539-90c0-49a7-90e9-7d6e5b0a471a" (UID: "e3156539-90c0-49a7-90e9-7d6e5b0a471a"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:55:57 crc kubenswrapper[5112]: I1208 17:55:57.524715 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g42l7\" (UniqueName: \"kubernetes.io/projected/e3156539-90c0-49a7-90e9-7d6e5b0a471a-kube-api-access-g42l7\") pod \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " Dec 08 17:55:57 crc kubenswrapper[5112]: I1208 17:55:57.524801 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e3156539-90c0-49a7-90e9-7d6e5b0a471a-buildcachedir\") pod \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " Dec 08 17:55:57 crc kubenswrapper[5112]: I1208 17:55:57.524843 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e3156539-90c0-49a7-90e9-7d6e5b0a471a-build-ca-bundles\") pod \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\" (UID: 
\"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " Dec 08 17:55:57 crc kubenswrapper[5112]: I1208 17:55:57.524900 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-c6q5x-pull\" (UniqueName: \"kubernetes.io/secret/e3156539-90c0-49a7-90e9-7d6e5b0a471a-builder-dockercfg-c6q5x-pull\") pod \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " Dec 08 17:55:57 crc kubenswrapper[5112]: I1208 17:55:57.524910 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3156539-90c0-49a7-90e9-7d6e5b0a471a-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "e3156539-90c0-49a7-90e9-7d6e5b0a471a" (UID: "e3156539-90c0-49a7-90e9-7d6e5b0a471a"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:55:57 crc kubenswrapper[5112]: I1208 17:55:57.524928 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-c6q5x-push\" (UniqueName: \"kubernetes.io/secret/e3156539-90c0-49a7-90e9-7d6e5b0a471a-builder-dockercfg-c6q5x-push\") pod \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " Dec 08 17:55:57 crc kubenswrapper[5112]: I1208 17:55:57.524993 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e3156539-90c0-49a7-90e9-7d6e5b0a471a-build-blob-cache\") pod \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " Dec 08 17:55:57 crc kubenswrapper[5112]: I1208 17:55:57.525028 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e3156539-90c0-49a7-90e9-7d6e5b0a471a-build-proxy-ca-bundles\") pod \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " Dec 08 
17:55:57 crc kubenswrapper[5112]: I1208 17:55:57.525123 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e3156539-90c0-49a7-90e9-7d6e5b0a471a-build-system-configs\") pod \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\" (UID: \"e3156539-90c0-49a7-90e9-7d6e5b0a471a\") " Dec 08 17:55:57 crc kubenswrapper[5112]: I1208 17:55:57.525443 5112 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e3156539-90c0-49a7-90e9-7d6e5b0a471a-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:57 crc kubenswrapper[5112]: I1208 17:55:57.525459 5112 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e3156539-90c0-49a7-90e9-7d6e5b0a471a-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:57 crc kubenswrapper[5112]: I1208 17:55:57.525469 5112 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e3156539-90c0-49a7-90e9-7d6e5b0a471a-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:57 crc kubenswrapper[5112]: I1208 17:55:57.525770 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3156539-90c0-49a7-90e9-7d6e5b0a471a-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "e3156539-90c0-49a7-90e9-7d6e5b0a471a" (UID: "e3156539-90c0-49a7-90e9-7d6e5b0a471a"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:55:57 crc kubenswrapper[5112]: I1208 17:55:57.525978 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3156539-90c0-49a7-90e9-7d6e5b0a471a-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "e3156539-90c0-49a7-90e9-7d6e5b0a471a" (UID: "e3156539-90c0-49a7-90e9-7d6e5b0a471a"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:55:57 crc kubenswrapper[5112]: I1208 17:55:57.526157 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3156539-90c0-49a7-90e9-7d6e5b0a471a-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "e3156539-90c0-49a7-90e9-7d6e5b0a471a" (UID: "e3156539-90c0-49a7-90e9-7d6e5b0a471a"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:55:57 crc kubenswrapper[5112]: I1208 17:55:57.526279 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3156539-90c0-49a7-90e9-7d6e5b0a471a-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "e3156539-90c0-49a7-90e9-7d6e5b0a471a" (UID: "e3156539-90c0-49a7-90e9-7d6e5b0a471a"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:55:57 crc kubenswrapper[5112]: I1208 17:55:57.526490 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3156539-90c0-49a7-90e9-7d6e5b0a471a-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "e3156539-90c0-49a7-90e9-7d6e5b0a471a" (UID: "e3156539-90c0-49a7-90e9-7d6e5b0a471a"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:55:57 crc kubenswrapper[5112]: I1208 17:55:57.531420 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3156539-90c0-49a7-90e9-7d6e5b0a471a-kube-api-access-g42l7" (OuterVolumeSpecName: "kube-api-access-g42l7") pod "e3156539-90c0-49a7-90e9-7d6e5b0a471a" (UID: "e3156539-90c0-49a7-90e9-7d6e5b0a471a"). InnerVolumeSpecName "kube-api-access-g42l7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:55:57 crc kubenswrapper[5112]: I1208 17:55:57.531730 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3156539-90c0-49a7-90e9-7d6e5b0a471a-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "e3156539-90c0-49a7-90e9-7d6e5b0a471a" (UID: "e3156539-90c0-49a7-90e9-7d6e5b0a471a"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:55:57 crc kubenswrapper[5112]: I1208 17:55:57.532015 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3156539-90c0-49a7-90e9-7d6e5b0a471a-builder-dockercfg-c6q5x-push" (OuterVolumeSpecName: "builder-dockercfg-c6q5x-push") pod "e3156539-90c0-49a7-90e9-7d6e5b0a471a" (UID: "e3156539-90c0-49a7-90e9-7d6e5b0a471a"). InnerVolumeSpecName "builder-dockercfg-c6q5x-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:55:57 crc kubenswrapper[5112]: I1208 17:55:57.533328 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3156539-90c0-49a7-90e9-7d6e5b0a471a-builder-dockercfg-c6q5x-pull" (OuterVolumeSpecName: "builder-dockercfg-c6q5x-pull") pod "e3156539-90c0-49a7-90e9-7d6e5b0a471a" (UID: "e3156539-90c0-49a7-90e9-7d6e5b0a471a"). InnerVolumeSpecName "builder-dockercfg-c6q5x-pull". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:55:57 crc kubenswrapper[5112]: I1208 17:55:57.626252 5112 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e3156539-90c0-49a7-90e9-7d6e5b0a471a-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:57 crc kubenswrapper[5112]: I1208 17:55:57.626294 5112 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e3156539-90c0-49a7-90e9-7d6e5b0a471a-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:57 crc kubenswrapper[5112]: I1208 17:55:57.626307 5112 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e3156539-90c0-49a7-90e9-7d6e5b0a471a-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:57 crc kubenswrapper[5112]: I1208 17:55:57.626319 5112 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e3156539-90c0-49a7-90e9-7d6e5b0a471a-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:57 crc kubenswrapper[5112]: I1208 17:55:57.626330 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g42l7\" (UniqueName: \"kubernetes.io/projected/e3156539-90c0-49a7-90e9-7d6e5b0a471a-kube-api-access-g42l7\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:57 crc kubenswrapper[5112]: I1208 17:55:57.626341 5112 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e3156539-90c0-49a7-90e9-7d6e5b0a471a-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:57 crc kubenswrapper[5112]: I1208 17:55:57.626351 5112 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e3156539-90c0-49a7-90e9-7d6e5b0a471a-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:57 crc 
kubenswrapper[5112]: I1208 17:55:57.626362 5112 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-c6q5x-pull\" (UniqueName: \"kubernetes.io/secret/e3156539-90c0-49a7-90e9-7d6e5b0a471a-builder-dockercfg-c6q5x-pull\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:57 crc kubenswrapper[5112]: I1208 17:55:57.626374 5112 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-c6q5x-push\" (UniqueName: \"kubernetes.io/secret/e3156539-90c0-49a7-90e9-7d6e5b0a471a-builder-dockercfg-c6q5x-push\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:58 crc kubenswrapper[5112]: I1208 17:55:58.099741 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_e3156539-90c0-49a7-90e9-7d6e5b0a471a/git-clone/0.log" Dec 08 17:55:58 crc kubenswrapper[5112]: I1208 17:55:58.099786 5112 generic.go:358] "Generic (PLEG): container finished" podID="e3156539-90c0-49a7-90e9-7d6e5b0a471a" containerID="d658d8dd6c5bbae61504d349a00f1b699d0abaccfc4d77498f6440eae090bae4" exitCode=1 Dec 08 17:55:58 crc kubenswrapper[5112]: I1208 17:55:58.099879 5112 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:55:58 crc kubenswrapper[5112]: I1208 17:55:58.099876 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"e3156539-90c0-49a7-90e9-7d6e5b0a471a","Type":"ContainerDied","Data":"d658d8dd6c5bbae61504d349a00f1b699d0abaccfc4d77498f6440eae090bae4"} Dec 08 17:55:58 crc kubenswrapper[5112]: I1208 17:55:58.100062 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"e3156539-90c0-49a7-90e9-7d6e5b0a471a","Type":"ContainerDied","Data":"e50ff4915d69594489f48030a5045f1aedb4bba5df6fdffb482a53f767d873a5"} Dec 08 17:55:58 crc kubenswrapper[5112]: I1208 17:55:58.100175 5112 scope.go:117] "RemoveContainer" containerID="d658d8dd6c5bbae61504d349a00f1b699d0abaccfc4d77498f6440eae090bae4" Dec 08 17:55:58 crc kubenswrapper[5112]: I1208 17:55:58.126760 5112 scope.go:117] "RemoveContainer" containerID="d658d8dd6c5bbae61504d349a00f1b699d0abaccfc4d77498f6440eae090bae4" Dec 08 17:55:58 crc kubenswrapper[5112]: E1208 17:55:58.127471 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d658d8dd6c5bbae61504d349a00f1b699d0abaccfc4d77498f6440eae090bae4\": container with ID starting with d658d8dd6c5bbae61504d349a00f1b699d0abaccfc4d77498f6440eae090bae4 not found: ID does not exist" containerID="d658d8dd6c5bbae61504d349a00f1b699d0abaccfc4d77498f6440eae090bae4" Dec 08 17:55:58 crc kubenswrapper[5112]: I1208 17:55:58.127533 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d658d8dd6c5bbae61504d349a00f1b699d0abaccfc4d77498f6440eae090bae4"} err="failed to get container status \"d658d8dd6c5bbae61504d349a00f1b699d0abaccfc4d77498f6440eae090bae4\": rpc error: code = NotFound desc = could not find container 
\"d658d8dd6c5bbae61504d349a00f1b699d0abaccfc4d77498f6440eae090bae4\": container with ID starting with d658d8dd6c5bbae61504d349a00f1b699d0abaccfc4d77498f6440eae090bae4 not found: ID does not exist" Dec 08 17:55:58 crc kubenswrapper[5112]: I1208 17:55:58.130592 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 08 17:55:58 crc kubenswrapper[5112]: I1208 17:55:58.136884 5112 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 08 17:55:59 crc kubenswrapper[5112]: I1208 17:55:59.328451 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3156539-90c0-49a7-90e9-7d6e5b0a471a" path="/var/lib/kubelet/pods/e3156539-90c0-49a7-90e9-7d6e5b0a471a/volumes" Dec 08 17:56:45 crc kubenswrapper[5112]: I1208 17:56:45.189294 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kskqb/must-gather-btzpl"] Dec 08 17:56:45 crc kubenswrapper[5112]: I1208 17:56:45.190981 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e3156539-90c0-49a7-90e9-7d6e5b0a471a" containerName="git-clone" Dec 08 17:56:45 crc kubenswrapper[5112]: I1208 17:56:45.191003 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3156539-90c0-49a7-90e9-7d6e5b0a471a" containerName="git-clone" Dec 08 17:56:45 crc kubenswrapper[5112]: I1208 17:56:45.191215 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="e3156539-90c0-49a7-90e9-7d6e5b0a471a" containerName="git-clone" Dec 08 17:56:45 crc kubenswrapper[5112]: I1208 17:56:45.201901 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kskqb/must-gather-btzpl"] Dec 08 17:56:45 crc kubenswrapper[5112]: I1208 17:56:45.202106 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kskqb/must-gather-btzpl" Dec 08 17:56:45 crc kubenswrapper[5112]: I1208 17:56:45.205553 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-kskqb\"/\"openshift-service-ca.crt\"" Dec 08 17:56:45 crc kubenswrapper[5112]: I1208 17:56:45.206021 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-kskqb\"/\"default-dockercfg-gkttf\"" Dec 08 17:56:45 crc kubenswrapper[5112]: I1208 17:56:45.215472 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-kskqb\"/\"kube-root-ca.crt\"" Dec 08 17:56:45 crc kubenswrapper[5112]: I1208 17:56:45.289720 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cffcde95-4628-4fe6-aeb3-a2185a31e795-must-gather-output\") pod \"must-gather-btzpl\" (UID: \"cffcde95-4628-4fe6-aeb3-a2185a31e795\") " pod="openshift-must-gather-kskqb/must-gather-btzpl" Dec 08 17:56:45 crc kubenswrapper[5112]: I1208 17:56:45.290002 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8db2\" (UniqueName: \"kubernetes.io/projected/cffcde95-4628-4fe6-aeb3-a2185a31e795-kube-api-access-l8db2\") pod \"must-gather-btzpl\" (UID: \"cffcde95-4628-4fe6-aeb3-a2185a31e795\") " pod="openshift-must-gather-kskqb/must-gather-btzpl" Dec 08 17:56:45 crc kubenswrapper[5112]: I1208 17:56:45.391797 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l8db2\" (UniqueName: \"kubernetes.io/projected/cffcde95-4628-4fe6-aeb3-a2185a31e795-kube-api-access-l8db2\") pod \"must-gather-btzpl\" (UID: \"cffcde95-4628-4fe6-aeb3-a2185a31e795\") " pod="openshift-must-gather-kskqb/must-gather-btzpl" Dec 08 17:56:45 crc kubenswrapper[5112]: I1208 17:56:45.391909 5112 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cffcde95-4628-4fe6-aeb3-a2185a31e795-must-gather-output\") pod \"must-gather-btzpl\" (UID: \"cffcde95-4628-4fe6-aeb3-a2185a31e795\") " pod="openshift-must-gather-kskqb/must-gather-btzpl" Dec 08 17:56:45 crc kubenswrapper[5112]: I1208 17:56:45.392713 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cffcde95-4628-4fe6-aeb3-a2185a31e795-must-gather-output\") pod \"must-gather-btzpl\" (UID: \"cffcde95-4628-4fe6-aeb3-a2185a31e795\") " pod="openshift-must-gather-kskqb/must-gather-btzpl" Dec 08 17:56:45 crc kubenswrapper[5112]: I1208 17:56:45.437592 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8db2\" (UniqueName: \"kubernetes.io/projected/cffcde95-4628-4fe6-aeb3-a2185a31e795-kube-api-access-l8db2\") pod \"must-gather-btzpl\" (UID: \"cffcde95-4628-4fe6-aeb3-a2185a31e795\") " pod="openshift-must-gather-kskqb/must-gather-btzpl" Dec 08 17:56:45 crc kubenswrapper[5112]: I1208 17:56:45.525469 5112 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kskqb/must-gather-btzpl" Dec 08 17:56:46 crc kubenswrapper[5112]: I1208 17:56:46.107211 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kskqb/must-gather-btzpl"] Dec 08 17:56:46 crc kubenswrapper[5112]: I1208 17:56:46.153758 5112 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 08 17:56:46 crc kubenswrapper[5112]: I1208 17:56:46.454864 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kskqb/must-gather-btzpl" event={"ID":"cffcde95-4628-4fe6-aeb3-a2185a31e795","Type":"ContainerStarted","Data":"a127b40779fbd9b80ccad75710f03f21af3eda343023bed04a1518ea3c60715d"} Dec 08 17:56:58 crc kubenswrapper[5112]: I1208 17:56:58.748891 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kskqb/must-gather-btzpl" event={"ID":"cffcde95-4628-4fe6-aeb3-a2185a31e795","Type":"ContainerStarted","Data":"c88d3cd6a97ddabc5a169c23c9969fe364bd946c7c538ca91c67755553d2f7c7"} Dec 08 17:56:58 crc kubenswrapper[5112]: I1208 17:56:58.749480 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kskqb/must-gather-btzpl" event={"ID":"cffcde95-4628-4fe6-aeb3-a2185a31e795","Type":"ContainerStarted","Data":"c9a3e356f095e0a3efa8b3209d14d4961af03ff9b4445a58b5de813580977dda"} Dec 08 17:56:58 crc kubenswrapper[5112]: I1208 17:56:58.768873 5112 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-kskqb/must-gather-btzpl" podStartSLOduration=2.235490307 podStartE2EDuration="13.76885424s" podCreationTimestamp="2025-12-08 17:56:45 +0000 UTC" firstStartedPulling="2025-12-08 17:56:46.154036385 +0000 UTC m=+983.163585086" lastFinishedPulling="2025-12-08 17:56:57.687400318 +0000 UTC m=+994.696949019" observedRunningTime="2025-12-08 17:56:58.766594999 +0000 UTC m=+995.776143710" watchObservedRunningTime="2025-12-08 17:56:58.76885424 +0000 UTC 
m=+995.778402941" Dec 08 17:57:05 crc kubenswrapper[5112]: I1208 17:57:05.183715 5112 ???:1] "http: TLS handshake error from 192.168.126.11:52730: no serving certificate available for the kubelet" Dec 08 17:57:11 crc kubenswrapper[5112]: I1208 17:57:11.707035 5112 patch_prober.go:28] interesting pod/machine-config-daemon-s6wzf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:57:11 crc kubenswrapper[5112]: I1208 17:57:11.707937 5112 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" podUID="95e46da0-94bb-4d22-804b-b3018984cdac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 17:57:14 crc kubenswrapper[5112]: E1208 17:57:14.335252 5112 certificate_manager.go:613] "Certificate request was not signed" err="timed out waiting for the condition" logger="kubernetes.io/kubelet-serving.UnhandledError" Dec 08 17:57:16 crc kubenswrapper[5112]: I1208 17:57:16.391637 5112 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Dec 08 17:57:16 crc kubenswrapper[5112]: I1208 17:57:16.401266 5112 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 08 17:57:16 crc kubenswrapper[5112]: I1208 17:57:16.418696 5112 ???:1] "http: TLS handshake error from 192.168.126.11:45270: no serving certificate available for the kubelet" Dec 08 17:57:16 crc kubenswrapper[5112]: I1208 17:57:16.444052 5112 ???:1] "http: TLS handshake error from 192.168.126.11:45280: no serving certificate available for the kubelet" Dec 08 17:57:16 crc kubenswrapper[5112]: I1208 17:57:16.472628 
5112 ???:1] "http: TLS handshake error from 192.168.126.11:45286: no serving certificate available for the kubelet" Dec 08 17:57:16 crc kubenswrapper[5112]: I1208 17:57:16.513048 5112 ???:1] "http: TLS handshake error from 192.168.126.11:45294: no serving certificate available for the kubelet" Dec 08 17:57:16 crc kubenswrapper[5112]: I1208 17:57:16.571173 5112 ???:1] "http: TLS handshake error from 192.168.126.11:45304: no serving certificate available for the kubelet" Dec 08 17:57:16 crc kubenswrapper[5112]: I1208 17:57:16.671551 5112 ???:1] "http: TLS handshake error from 192.168.126.11:45312: no serving certificate available for the kubelet" Dec 08 17:57:16 crc kubenswrapper[5112]: I1208 17:57:16.857421 5112 ???:1] "http: TLS handshake error from 192.168.126.11:45318: no serving certificate available for the kubelet" Dec 08 17:57:17 crc kubenswrapper[5112]: I1208 17:57:17.202605 5112 ???:1] "http: TLS handshake error from 192.168.126.11:45320: no serving certificate available for the kubelet" Dec 08 17:57:17 crc kubenswrapper[5112]: I1208 17:57:17.872565 5112 ???:1] "http: TLS handshake error from 192.168.126.11:39760: no serving certificate available for the kubelet" Dec 08 17:57:19 crc kubenswrapper[5112]: I1208 17:57:19.176443 5112 ???:1] "http: TLS handshake error from 192.168.126.11:39772: no serving certificate available for the kubelet" Dec 08 17:57:21 crc kubenswrapper[5112]: I1208 17:57:21.761476 5112 ???:1] "http: TLS handshake error from 192.168.126.11:39774: no serving certificate available for the kubelet" Dec 08 17:57:26 crc kubenswrapper[5112]: I1208 17:57:26.905749 5112 ???:1] "http: TLS handshake error from 192.168.126.11:39778: no serving certificate available for the kubelet" Dec 08 17:57:37 crc kubenswrapper[5112]: I1208 17:57:37.168422 5112 ???:1] "http: TLS handshake error from 192.168.126.11:59522: no serving certificate available for the kubelet" Dec 08 17:57:37 crc kubenswrapper[5112]: I1208 17:57:37.218070 5112 ???:1] "http: TLS 
handshake error from 192.168.126.11:59538: no serving certificate available for the kubelet" Dec 08 17:57:37 crc kubenswrapper[5112]: I1208 17:57:37.335302 5112 ???:1] "http: TLS handshake error from 192.168.126.11:59552: no serving certificate available for the kubelet" Dec 08 17:57:37 crc kubenswrapper[5112]: I1208 17:57:37.378455 5112 ???:1] "http: TLS handshake error from 192.168.126.11:59554: no serving certificate available for the kubelet" Dec 08 17:57:41 crc kubenswrapper[5112]: I1208 17:57:41.706452 5112 patch_prober.go:28] interesting pod/machine-config-daemon-s6wzf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:57:41 crc kubenswrapper[5112]: I1208 17:57:41.706809 5112 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" podUID="95e46da0-94bb-4d22-804b-b3018984cdac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 17:57:48 crc kubenswrapper[5112]: I1208 17:57:48.510023 5112 ???:1] "http: TLS handshake error from 192.168.126.11:43520: no serving certificate available for the kubelet" Dec 08 17:57:48 crc kubenswrapper[5112]: I1208 17:57:48.720859 5112 ???:1] "http: TLS handshake error from 192.168.126.11:43532: no serving certificate available for the kubelet" Dec 08 17:57:48 crc kubenswrapper[5112]: I1208 17:57:48.729936 5112 ???:1] "http: TLS handshake error from 192.168.126.11:43534: no serving certificate available for the kubelet" Dec 08 17:57:57 crc kubenswrapper[5112]: I1208 17:57:57.674392 5112 ???:1] "http: TLS handshake error from 192.168.126.11:33566: no serving certificate available for the kubelet" Dec 08 17:58:04 crc kubenswrapper[5112]: I1208 17:58:04.842771 5112 ???:1] "http: TLS 
handshake error from 192.168.126.11:33582: no serving certificate available for the kubelet" Dec 08 17:58:04 crc kubenswrapper[5112]: I1208 17:58:04.967240 5112 ???:1] "http: TLS handshake error from 192.168.126.11:33584: no serving certificate available for the kubelet" Dec 08 17:58:04 crc kubenswrapper[5112]: I1208 17:58:04.986398 5112 ???:1] "http: TLS handshake error from 192.168.126.11:33596: no serving certificate available for the kubelet" Dec 08 17:58:05 crc kubenswrapper[5112]: I1208 17:58:05.030300 5112 ???:1] "http: TLS handshake error from 192.168.126.11:33608: no serving certificate available for the kubelet" Dec 08 17:58:05 crc kubenswrapper[5112]: I1208 17:58:05.229295 5112 ???:1] "http: TLS handshake error from 192.168.126.11:33614: no serving certificate available for the kubelet" Dec 08 17:58:05 crc kubenswrapper[5112]: I1208 17:58:05.249283 5112 ???:1] "http: TLS handshake error from 192.168.126.11:33622: no serving certificate available for the kubelet" Dec 08 17:58:05 crc kubenswrapper[5112]: I1208 17:58:05.264939 5112 ???:1] "http: TLS handshake error from 192.168.126.11:33636: no serving certificate available for the kubelet" Dec 08 17:58:05 crc kubenswrapper[5112]: I1208 17:58:05.410441 5112 ???:1] "http: TLS handshake error from 192.168.126.11:33644: no serving certificate available for the kubelet" Dec 08 17:58:05 crc kubenswrapper[5112]: I1208 17:58:05.578033 5112 ???:1] "http: TLS handshake error from 192.168.126.11:33658: no serving certificate available for the kubelet" Dec 08 17:58:05 crc kubenswrapper[5112]: I1208 17:58:05.596666 5112 ???:1] "http: TLS handshake error from 192.168.126.11:33672: no serving certificate available for the kubelet" Dec 08 17:58:05 crc kubenswrapper[5112]: I1208 17:58:05.598106 5112 ???:1] "http: TLS handshake error from 192.168.126.11:33676: no serving certificate available for the kubelet" Dec 08 17:58:05 crc kubenswrapper[5112]: I1208 17:58:05.888731 5112 ???:1] "http: TLS handshake error from 
192.168.126.11:33682: no serving certificate available for the kubelet" Dec 08 17:58:05 crc kubenswrapper[5112]: I1208 17:58:05.911103 5112 ???:1] "http: TLS handshake error from 192.168.126.11:33684: no serving certificate available for the kubelet" Dec 08 17:58:05 crc kubenswrapper[5112]: I1208 17:58:05.911601 5112 ???:1] "http: TLS handshake error from 192.168.126.11:33692: no serving certificate available for the kubelet" Dec 08 17:58:06 crc kubenswrapper[5112]: I1208 17:58:06.049191 5112 ???:1] "http: TLS handshake error from 192.168.126.11:33696: no serving certificate available for the kubelet" Dec 08 17:58:06 crc kubenswrapper[5112]: I1208 17:58:06.389337 5112 ???:1] "http: TLS handshake error from 192.168.126.11:33712: no serving certificate available for the kubelet" Dec 08 17:58:06 crc kubenswrapper[5112]: I1208 17:58:06.433367 5112 ???:1] "http: TLS handshake error from 192.168.126.11:33728: no serving certificate available for the kubelet" Dec 08 17:58:06 crc kubenswrapper[5112]: I1208 17:58:06.444656 5112 ???:1] "http: TLS handshake error from 192.168.126.11:33740: no serving certificate available for the kubelet" Dec 08 17:58:06 crc kubenswrapper[5112]: I1208 17:58:06.629677 5112 ???:1] "http: TLS handshake error from 192.168.126.11:33746: no serving certificate available for the kubelet" Dec 08 17:58:06 crc kubenswrapper[5112]: I1208 17:58:06.630883 5112 ???:1] "http: TLS handshake error from 192.168.126.11:33756: no serving certificate available for the kubelet" Dec 08 17:58:06 crc kubenswrapper[5112]: I1208 17:58:06.633596 5112 ???:1] "http: TLS handshake error from 192.168.126.11:33758: no serving certificate available for the kubelet" Dec 08 17:58:06 crc kubenswrapper[5112]: I1208 17:58:06.799271 5112 ???:1] "http: TLS handshake error from 192.168.126.11:33768: no serving certificate available for the kubelet" Dec 08 17:58:06 crc kubenswrapper[5112]: I1208 17:58:06.942642 5112 ???:1] "http: TLS handshake error from 192.168.126.11:33784: no 
serving certificate available for the kubelet" Dec 08 17:58:06 crc kubenswrapper[5112]: I1208 17:58:06.955618 5112 ???:1] "http: TLS handshake error from 192.168.126.11:33792: no serving certificate available for the kubelet" Dec 08 17:58:06 crc kubenswrapper[5112]: I1208 17:58:06.962127 5112 ???:1] "http: TLS handshake error from 192.168.126.11:33802: no serving certificate available for the kubelet" Dec 08 17:58:07 crc kubenswrapper[5112]: I1208 17:58:07.125542 5112 ???:1] "http: TLS handshake error from 192.168.126.11:33818: no serving certificate available for the kubelet" Dec 08 17:58:07 crc kubenswrapper[5112]: I1208 17:58:07.151572 5112 ???:1] "http: TLS handshake error from 192.168.126.11:33828: no serving certificate available for the kubelet" Dec 08 17:58:07 crc kubenswrapper[5112]: I1208 17:58:07.156748 5112 ???:1] "http: TLS handshake error from 192.168.126.11:33836: no serving certificate available for the kubelet" Dec 08 17:58:07 crc kubenswrapper[5112]: I1208 17:58:07.295705 5112 ???:1] "http: TLS handshake error from 192.168.126.11:33852: no serving certificate available for the kubelet" Dec 08 17:58:07 crc kubenswrapper[5112]: I1208 17:58:07.448471 5112 ???:1] "http: TLS handshake error from 192.168.126.11:33856: no serving certificate available for the kubelet" Dec 08 17:58:07 crc kubenswrapper[5112]: I1208 17:58:07.454951 5112 ???:1] "http: TLS handshake error from 192.168.126.11:33858: no serving certificate available for the kubelet" Dec 08 17:58:07 crc kubenswrapper[5112]: I1208 17:58:07.459553 5112 ???:1] "http: TLS handshake error from 192.168.126.11:33860: no serving certificate available for the kubelet" Dec 08 17:58:07 crc kubenswrapper[5112]: I1208 17:58:07.646738 5112 ???:1] "http: TLS handshake error from 192.168.126.11:34114: no serving certificate available for the kubelet" Dec 08 17:58:07 crc kubenswrapper[5112]: I1208 17:58:07.651409 5112 ???:1] "http: TLS handshake error from 192.168.126.11:34116: no serving certificate available 
for the kubelet" Dec 08 17:58:07 crc kubenswrapper[5112]: I1208 17:58:07.657296 5112 ???:1] "http: TLS handshake error from 192.168.126.11:34130: no serving certificate available for the kubelet" Dec 08 17:58:07 crc kubenswrapper[5112]: I1208 17:58:07.818501 5112 ???:1] "http: TLS handshake error from 192.168.126.11:34144: no serving certificate available for the kubelet" Dec 08 17:58:08 crc kubenswrapper[5112]: I1208 17:58:08.169910 5112 ???:1] "http: TLS handshake error from 192.168.126.11:34150: no serving certificate available for the kubelet" Dec 08 17:58:08 crc kubenswrapper[5112]: I1208 17:58:08.183714 5112 ???:1] "http: TLS handshake error from 192.168.126.11:34162: no serving certificate available for the kubelet" Dec 08 17:58:08 crc kubenswrapper[5112]: I1208 17:58:08.186587 5112 ???:1] "http: TLS handshake error from 192.168.126.11:34170: no serving certificate available for the kubelet" Dec 08 17:58:08 crc kubenswrapper[5112]: I1208 17:58:08.316994 5112 ???:1] "http: TLS handshake error from 192.168.126.11:34176: no serving certificate available for the kubelet" Dec 08 17:58:08 crc kubenswrapper[5112]: I1208 17:58:08.331273 5112 ???:1] "http: TLS handshake error from 192.168.126.11:34184: no serving certificate available for the kubelet" Dec 08 17:58:08 crc kubenswrapper[5112]: I1208 17:58:08.341747 5112 ???:1] "http: TLS handshake error from 192.168.126.11:34188: no serving certificate available for the kubelet" Dec 08 17:58:08 crc kubenswrapper[5112]: I1208 17:58:08.376459 5112 ???:1] "http: TLS handshake error from 192.168.126.11:34198: no serving certificate available for the kubelet" Dec 08 17:58:08 crc kubenswrapper[5112]: I1208 17:58:08.501987 5112 ???:1] "http: TLS handshake error from 192.168.126.11:34208: no serving certificate available for the kubelet" Dec 08 17:58:08 crc kubenswrapper[5112]: I1208 17:58:08.787416 5112 ???:1] "http: TLS handshake error from 192.168.126.11:34222: no serving certificate available for the kubelet" Dec 08 
17:58:08 crc kubenswrapper[5112]: I1208 17:58:08.787819 5112 ???:1] "http: TLS handshake error from 192.168.126.11:34224: no serving certificate available for the kubelet" Dec 08 17:58:08 crc kubenswrapper[5112]: I1208 17:58:08.817260 5112 ???:1] "http: TLS handshake error from 192.168.126.11:34230: no serving certificate available for the kubelet" Dec 08 17:58:08 crc kubenswrapper[5112]: I1208 17:58:08.962902 5112 ???:1] "http: TLS handshake error from 192.168.126.11:34244: no serving certificate available for the kubelet" Dec 08 17:58:08 crc kubenswrapper[5112]: I1208 17:58:08.963472 5112 ???:1] "http: TLS handshake error from 192.168.126.11:34256: no serving certificate available for the kubelet" Dec 08 17:58:08 crc kubenswrapper[5112]: I1208 17:58:08.984802 5112 ???:1] "http: TLS handshake error from 192.168.126.11:34258: no serving certificate available for the kubelet" Dec 08 17:58:11 crc kubenswrapper[5112]: I1208 17:58:11.706926 5112 patch_prober.go:28] interesting pod/machine-config-daemon-s6wzf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:58:11 crc kubenswrapper[5112]: I1208 17:58:11.707357 5112 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" podUID="95e46da0-94bb-4d22-804b-b3018984cdac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 17:58:11 crc kubenswrapper[5112]: I1208 17:58:11.707419 5112 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" Dec 08 17:58:11 crc kubenswrapper[5112]: I1208 17:58:11.708157 5112 kuberuntime_manager.go:1107] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7df939746405f1e31b0b6c600b41c2ef2f32d550ab3e537995db11242c570dc3"} pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 17:58:11 crc kubenswrapper[5112]: I1208 17:58:11.708238 5112 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" podUID="95e46da0-94bb-4d22-804b-b3018984cdac" containerName="machine-config-daemon" containerID="cri-o://7df939746405f1e31b0b6c600b41c2ef2f32d550ab3e537995db11242c570dc3" gracePeriod=600 Dec 08 17:58:12 crc kubenswrapper[5112]: I1208 17:58:12.386285 5112 generic.go:358] "Generic (PLEG): container finished" podID="95e46da0-94bb-4d22-804b-b3018984cdac" containerID="7df939746405f1e31b0b6c600b41c2ef2f32d550ab3e537995db11242c570dc3" exitCode=0 Dec 08 17:58:12 crc kubenswrapper[5112]: I1208 17:58:12.386501 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" event={"ID":"95e46da0-94bb-4d22-804b-b3018984cdac","Type":"ContainerDied","Data":"7df939746405f1e31b0b6c600b41c2ef2f32d550ab3e537995db11242c570dc3"} Dec 08 17:58:12 crc kubenswrapper[5112]: I1208 17:58:12.386661 5112 scope.go:117] "RemoveContainer" containerID="240b1d29409d9f35aedfce10e5ba170d923c2b90de94cecbc02a5feba56821b7" Dec 08 17:58:13 crc kubenswrapper[5112]: I1208 17:58:13.397052 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" event={"ID":"95e46da0-94bb-4d22-804b-b3018984cdac","Type":"ContainerStarted","Data":"14af1fffadd19c4b907e8ff0c72fa53b4484106ff3caf5f6d014e70ba5144b07"} Dec 08 17:58:19 crc kubenswrapper[5112]: I1208 17:58:19.941796 5112 ???:1] "http: TLS handshake error from 192.168.126.11:41012: no serving certificate available for the kubelet" Dec 08 
17:58:20 crc kubenswrapper[5112]: I1208 17:58:20.067824 5112 ???:1] "http: TLS handshake error from 192.168.126.11:41020: no serving certificate available for the kubelet" Dec 08 17:58:20 crc kubenswrapper[5112]: I1208 17:58:20.114504 5112 ???:1] "http: TLS handshake error from 192.168.126.11:41030: no serving certificate available for the kubelet" Dec 08 17:58:20 crc kubenswrapper[5112]: I1208 17:58:20.242780 5112 ???:1] "http: TLS handshake error from 192.168.126.11:41046: no serving certificate available for the kubelet" Dec 08 17:58:20 crc kubenswrapper[5112]: I1208 17:58:20.290123 5112 ???:1] "http: TLS handshake error from 192.168.126.11:41048: no serving certificate available for the kubelet" Dec 08 17:58:38 crc kubenswrapper[5112]: I1208 17:58:38.662036 5112 ???:1] "http: TLS handshake error from 192.168.126.11:55928: no serving certificate available for the kubelet" Dec 08 17:58:59 crc kubenswrapper[5112]: I1208 17:58:59.738169 5112 generic.go:358] "Generic (PLEG): container finished" podID="cffcde95-4628-4fe6-aeb3-a2185a31e795" containerID="c9a3e356f095e0a3efa8b3209d14d4961af03ff9b4445a58b5de813580977dda" exitCode=0 Dec 08 17:58:59 crc kubenswrapper[5112]: I1208 17:58:59.738301 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kskqb/must-gather-btzpl" event={"ID":"cffcde95-4628-4fe6-aeb3-a2185a31e795","Type":"ContainerDied","Data":"c9a3e356f095e0a3efa8b3209d14d4961af03ff9b4445a58b5de813580977dda"} Dec 08 17:58:59 crc kubenswrapper[5112]: I1208 17:58:59.739683 5112 scope.go:117] "RemoveContainer" containerID="c9a3e356f095e0a3efa8b3209d14d4961af03ff9b4445a58b5de813580977dda" Dec 08 17:59:05 crc kubenswrapper[5112]: I1208 17:59:05.204501 5112 ???:1] "http: TLS handshake error from 192.168.126.11:46508: no serving certificate available for the kubelet" Dec 08 17:59:05 crc kubenswrapper[5112]: I1208 17:59:05.364337 5112 ???:1] "http: TLS handshake error from 192.168.126.11:46516: no serving certificate available for the kubelet" 
Dec 08 17:59:05 crc kubenswrapper[5112]: I1208 17:59:05.380905 5112 ???:1] "http: TLS handshake error from 192.168.126.11:46520: no serving certificate available for the kubelet" Dec 08 17:59:05 crc kubenswrapper[5112]: I1208 17:59:05.416763 5112 ???:1] "http: TLS handshake error from 192.168.126.11:46522: no serving certificate available for the kubelet" Dec 08 17:59:05 crc kubenswrapper[5112]: I1208 17:59:05.430856 5112 ???:1] "http: TLS handshake error from 192.168.126.11:46534: no serving certificate available for the kubelet" Dec 08 17:59:05 crc kubenswrapper[5112]: I1208 17:59:05.449616 5112 ???:1] "http: TLS handshake error from 192.168.126.11:46548: no serving certificate available for the kubelet" Dec 08 17:59:05 crc kubenswrapper[5112]: I1208 17:59:05.464207 5112 ???:1] "http: TLS handshake error from 192.168.126.11:46550: no serving certificate available for the kubelet" Dec 08 17:59:05 crc kubenswrapper[5112]: I1208 17:59:05.481840 5112 ???:1] "http: TLS handshake error from 192.168.126.11:46560: no serving certificate available for the kubelet" Dec 08 17:59:05 crc kubenswrapper[5112]: I1208 17:59:05.496974 5112 ???:1] "http: TLS handshake error from 192.168.126.11:46564: no serving certificate available for the kubelet" Dec 08 17:59:05 crc kubenswrapper[5112]: I1208 17:59:05.662836 5112 ???:1] "http: TLS handshake error from 192.168.126.11:46580: no serving certificate available for the kubelet" Dec 08 17:59:05 crc kubenswrapper[5112]: I1208 17:59:05.674310 5112 ???:1] "http: TLS handshake error from 192.168.126.11:46582: no serving certificate available for the kubelet" Dec 08 17:59:05 crc kubenswrapper[5112]: I1208 17:59:05.698179 5112 ???:1] "http: TLS handshake error from 192.168.126.11:46584: no serving certificate available for the kubelet" Dec 08 17:59:05 crc kubenswrapper[5112]: I1208 17:59:05.709382 5112 ???:1] "http: TLS handshake error from 192.168.126.11:46594: no serving certificate available for the kubelet" Dec 08 17:59:05 crc 
kubenswrapper[5112]: I1208 17:59:05.725948 5112 ???:1] "http: TLS handshake error from 192.168.126.11:46606: no serving certificate available for the kubelet" Dec 08 17:59:05 crc kubenswrapper[5112]: I1208 17:59:05.736927 5112 ???:1] "http: TLS handshake error from 192.168.126.11:46620: no serving certificate available for the kubelet" Dec 08 17:59:05 crc kubenswrapper[5112]: I1208 17:59:05.748505 5112 ???:1] "http: TLS handshake error from 192.168.126.11:46636: no serving certificate available for the kubelet" Dec 08 17:59:05 crc kubenswrapper[5112]: I1208 17:59:05.756731 5112 ???:1] "http: TLS handshake error from 192.168.126.11:46648: no serving certificate available for the kubelet" Dec 08 17:59:10 crc kubenswrapper[5112]: I1208 17:59:10.806650 5112 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kskqb/must-gather-btzpl"] Dec 08 17:59:10 crc kubenswrapper[5112]: I1208 17:59:10.806976 5112 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-must-gather-kskqb/must-gather-btzpl" podUID="cffcde95-4628-4fe6-aeb3-a2185a31e795" containerName="copy" containerID="cri-o://c88d3cd6a97ddabc5a169c23c9969fe364bd946c7c538ca91c67755553d2f7c7" gracePeriod=2 Dec 08 17:59:10 crc kubenswrapper[5112]: I1208 17:59:10.810632 5112 status_manager.go:895] "Failed to get status for pod" podUID="cffcde95-4628-4fe6-aeb3-a2185a31e795" pod="openshift-must-gather-kskqb/must-gather-btzpl" err="pods \"must-gather-btzpl\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-kskqb\": no relationship found between node 'crc' and this object" Dec 08 17:59:10 crc kubenswrapper[5112]: I1208 17:59:10.815019 5112 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kskqb/must-gather-btzpl"] Dec 08 17:59:11 crc kubenswrapper[5112]: I1208 17:59:11.209347 5112 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-kskqb_must-gather-btzpl_cffcde95-4628-4fe6-aeb3-a2185a31e795/copy/0.log"
Dec 08 17:59:11 crc kubenswrapper[5112]: I1208 17:59:11.209954 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kskqb/must-gather-btzpl"
Dec 08 17:59:11 crc kubenswrapper[5112]: I1208 17:59:11.211372 5112 status_manager.go:895] "Failed to get status for pod" podUID="cffcde95-4628-4fe6-aeb3-a2185a31e795" pod="openshift-must-gather-kskqb/must-gather-btzpl" err="pods \"must-gather-btzpl\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-kskqb\": no relationship found between node 'crc' and this object"
Dec 08 17:59:11 crc kubenswrapper[5112]: I1208 17:59:11.316345 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8db2\" (UniqueName: \"kubernetes.io/projected/cffcde95-4628-4fe6-aeb3-a2185a31e795-kube-api-access-l8db2\") pod \"cffcde95-4628-4fe6-aeb3-a2185a31e795\" (UID: \"cffcde95-4628-4fe6-aeb3-a2185a31e795\") "
Dec 08 17:59:11 crc kubenswrapper[5112]: I1208 17:59:11.316972 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cffcde95-4628-4fe6-aeb3-a2185a31e795-must-gather-output\") pod \"cffcde95-4628-4fe6-aeb3-a2185a31e795\" (UID: \"cffcde95-4628-4fe6-aeb3-a2185a31e795\") "
Dec 08 17:59:11 crc kubenswrapper[5112]: I1208 17:59:11.328513 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cffcde95-4628-4fe6-aeb3-a2185a31e795-kube-api-access-l8db2" (OuterVolumeSpecName: "kube-api-access-l8db2") pod "cffcde95-4628-4fe6-aeb3-a2185a31e795" (UID: "cffcde95-4628-4fe6-aeb3-a2185a31e795"). InnerVolumeSpecName "kube-api-access-l8db2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:59:11 crc kubenswrapper[5112]: I1208 17:59:11.365331 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cffcde95-4628-4fe6-aeb3-a2185a31e795-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "cffcde95-4628-4fe6-aeb3-a2185a31e795" (UID: "cffcde95-4628-4fe6-aeb3-a2185a31e795"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:59:11 crc kubenswrapper[5112]: I1208 17:59:11.418879 5112 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cffcde95-4628-4fe6-aeb3-a2185a31e795-must-gather-output\") on node \"crc\" DevicePath \"\""
Dec 08 17:59:11 crc kubenswrapper[5112]: I1208 17:59:11.418915 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l8db2\" (UniqueName: \"kubernetes.io/projected/cffcde95-4628-4fe6-aeb3-a2185a31e795-kube-api-access-l8db2\") on node \"crc\" DevicePath \"\""
Dec 08 17:59:11 crc kubenswrapper[5112]: I1208 17:59:11.829599 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kskqb_must-gather-btzpl_cffcde95-4628-4fe6-aeb3-a2185a31e795/copy/0.log"
Dec 08 17:59:11 crc kubenswrapper[5112]: I1208 17:59:11.832005 5112 generic.go:358] "Generic (PLEG): container finished" podID="cffcde95-4628-4fe6-aeb3-a2185a31e795" containerID="c88d3cd6a97ddabc5a169c23c9969fe364bd946c7c538ca91c67755553d2f7c7" exitCode=143
Dec 08 17:59:11 crc kubenswrapper[5112]: I1208 17:59:11.832111 5112 scope.go:117] "RemoveContainer" containerID="c88d3cd6a97ddabc5a169c23c9969fe364bd946c7c538ca91c67755553d2f7c7"
Dec 08 17:59:11 crc kubenswrapper[5112]: I1208 17:59:11.832122 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kskqb/must-gather-btzpl"
Dec 08 17:59:11 crc kubenswrapper[5112]: I1208 17:59:11.856138 5112 scope.go:117] "RemoveContainer" containerID="c9a3e356f095e0a3efa8b3209d14d4961af03ff9b4445a58b5de813580977dda"
Dec 08 17:59:11 crc kubenswrapper[5112]: I1208 17:59:11.926326 5112 scope.go:117] "RemoveContainer" containerID="c88d3cd6a97ddabc5a169c23c9969fe364bd946c7c538ca91c67755553d2f7c7"
Dec 08 17:59:11 crc kubenswrapper[5112]: E1208 17:59:11.926844 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c88d3cd6a97ddabc5a169c23c9969fe364bd946c7c538ca91c67755553d2f7c7\": container with ID starting with c88d3cd6a97ddabc5a169c23c9969fe364bd946c7c538ca91c67755553d2f7c7 not found: ID does not exist" containerID="c88d3cd6a97ddabc5a169c23c9969fe364bd946c7c538ca91c67755553d2f7c7"
Dec 08 17:59:11 crc kubenswrapper[5112]: I1208 17:59:11.926905 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c88d3cd6a97ddabc5a169c23c9969fe364bd946c7c538ca91c67755553d2f7c7"} err="failed to get container status \"c88d3cd6a97ddabc5a169c23c9969fe364bd946c7c538ca91c67755553d2f7c7\": rpc error: code = NotFound desc = could not find container \"c88d3cd6a97ddabc5a169c23c9969fe364bd946c7c538ca91c67755553d2f7c7\": container with ID starting with c88d3cd6a97ddabc5a169c23c9969fe364bd946c7c538ca91c67755553d2f7c7 not found: ID does not exist"
Dec 08 17:59:11 crc kubenswrapper[5112]: I1208 17:59:11.926932 5112 scope.go:117] "RemoveContainer" containerID="c9a3e356f095e0a3efa8b3209d14d4961af03ff9b4445a58b5de813580977dda"
Dec 08 17:59:11 crc kubenswrapper[5112]: E1208 17:59:11.927217 5112 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9a3e356f095e0a3efa8b3209d14d4961af03ff9b4445a58b5de813580977dda\": container with ID starting with c9a3e356f095e0a3efa8b3209d14d4961af03ff9b4445a58b5de813580977dda not found: ID does not exist" containerID="c9a3e356f095e0a3efa8b3209d14d4961af03ff9b4445a58b5de813580977dda"
Dec 08 17:59:11 crc kubenswrapper[5112]: I1208 17:59:11.927260 5112 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9a3e356f095e0a3efa8b3209d14d4961af03ff9b4445a58b5de813580977dda"} err="failed to get container status \"c9a3e356f095e0a3efa8b3209d14d4961af03ff9b4445a58b5de813580977dda\": rpc error: code = NotFound desc = could not find container \"c9a3e356f095e0a3efa8b3209d14d4961af03ff9b4445a58b5de813580977dda\": container with ID starting with c9a3e356f095e0a3efa8b3209d14d4961af03ff9b4445a58b5de813580977dda not found: ID does not exist"
Dec 08 17:59:13 crc kubenswrapper[5112]: I1208 17:59:13.326476 5112 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cffcde95-4628-4fe6-aeb3-a2185a31e795" path="/var/lib/kubelet/pods/cffcde95-4628-4fe6-aeb3-a2185a31e795/volumes"
Dec 08 18:00:00 crc kubenswrapper[5112]: I1208 18:00:00.140950 5112 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420280-dxwg6"]
Dec 08 18:00:00 crc kubenswrapper[5112]: I1208 18:00:00.142570 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cffcde95-4628-4fe6-aeb3-a2185a31e795" containerName="copy"
Dec 08 18:00:00 crc kubenswrapper[5112]: I1208 18:00:00.142586 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="cffcde95-4628-4fe6-aeb3-a2185a31e795" containerName="copy"
Dec 08 18:00:00 crc kubenswrapper[5112]: I1208 18:00:00.142605 5112 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cffcde95-4628-4fe6-aeb3-a2185a31e795" containerName="gather"
Dec 08 18:00:00 crc kubenswrapper[5112]: I1208 18:00:00.142612 5112 state_mem.go:107] "Deleted CPUSet assignment" podUID="cffcde95-4628-4fe6-aeb3-a2185a31e795" containerName="gather"
Dec 08 18:00:00 crc kubenswrapper[5112]: I1208 18:00:00.142720 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="cffcde95-4628-4fe6-aeb3-a2185a31e795" containerName="copy"
Dec 08 18:00:00 crc kubenswrapper[5112]: I1208 18:00:00.142731 5112 memory_manager.go:356] "RemoveStaleState removing state" podUID="cffcde95-4628-4fe6-aeb3-a2185a31e795" containerName="gather"
Dec 08 18:00:00 crc kubenswrapper[5112]: I1208 18:00:00.606521 5112 ???:1] "http: TLS handshake error from 192.168.126.11:55774: no serving certificate available for the kubelet"
Dec 08 18:00:00 crc kubenswrapper[5112]: I1208 18:00:00.703582 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420280-dxwg6"]
Dec 08 18:00:00 crc kubenswrapper[5112]: I1208 18:00:00.704116 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-dxwg6"
Dec 08 18:00:00 crc kubenswrapper[5112]: I1208 18:00:00.708419 5112 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\""
Dec 08 18:00:00 crc kubenswrapper[5112]: I1208 18:00:00.708743 5112 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\""
Dec 08 18:00:00 crc kubenswrapper[5112]: I1208 18:00:00.784965 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/17b2ad43-887b-4030-a2eb-88516ad3f4d1-secret-volume\") pod \"collect-profiles-29420280-dxwg6\" (UID: \"17b2ad43-887b-4030-a2eb-88516ad3f4d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-dxwg6"
Dec 08 18:00:00 crc kubenswrapper[5112]: I1208 18:00:00.785206 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mlnf\" (UniqueName: \"kubernetes.io/projected/17b2ad43-887b-4030-a2eb-88516ad3f4d1-kube-api-access-7mlnf\") pod \"collect-profiles-29420280-dxwg6\" (UID: \"17b2ad43-887b-4030-a2eb-88516ad3f4d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-dxwg6"
Dec 08 18:00:00 crc kubenswrapper[5112]: I1208 18:00:00.785239 5112 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17b2ad43-887b-4030-a2eb-88516ad3f4d1-config-volume\") pod \"collect-profiles-29420280-dxwg6\" (UID: \"17b2ad43-887b-4030-a2eb-88516ad3f4d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-dxwg6"
Dec 08 18:00:00 crc kubenswrapper[5112]: I1208 18:00:00.886446 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/17b2ad43-887b-4030-a2eb-88516ad3f4d1-secret-volume\") pod \"collect-profiles-29420280-dxwg6\" (UID: \"17b2ad43-887b-4030-a2eb-88516ad3f4d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-dxwg6"
Dec 08 18:00:00 crc kubenswrapper[5112]: I1208 18:00:00.886792 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7mlnf\" (UniqueName: \"kubernetes.io/projected/17b2ad43-887b-4030-a2eb-88516ad3f4d1-kube-api-access-7mlnf\") pod \"collect-profiles-29420280-dxwg6\" (UID: \"17b2ad43-887b-4030-a2eb-88516ad3f4d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-dxwg6"
Dec 08 18:00:00 crc kubenswrapper[5112]: I1208 18:00:00.886989 5112 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17b2ad43-887b-4030-a2eb-88516ad3f4d1-config-volume\") pod \"collect-profiles-29420280-dxwg6\" (UID: \"17b2ad43-887b-4030-a2eb-88516ad3f4d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-dxwg6"
Dec 08 18:00:00 crc kubenswrapper[5112]: I1208 18:00:00.887990 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17b2ad43-887b-4030-a2eb-88516ad3f4d1-config-volume\") pod \"collect-profiles-29420280-dxwg6\" (UID: \"17b2ad43-887b-4030-a2eb-88516ad3f4d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-dxwg6"
Dec 08 18:00:00 crc kubenswrapper[5112]: I1208 18:00:00.895826 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/17b2ad43-887b-4030-a2eb-88516ad3f4d1-secret-volume\") pod \"collect-profiles-29420280-dxwg6\" (UID: \"17b2ad43-887b-4030-a2eb-88516ad3f4d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-dxwg6"
Dec 08 18:00:00 crc kubenswrapper[5112]: I1208 18:00:00.906003 5112 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mlnf\" (UniqueName: \"kubernetes.io/projected/17b2ad43-887b-4030-a2eb-88516ad3f4d1-kube-api-access-7mlnf\") pod \"collect-profiles-29420280-dxwg6\" (UID: \"17b2ad43-887b-4030-a2eb-88516ad3f4d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-dxwg6"
Dec 08 18:00:01 crc kubenswrapper[5112]: I1208 18:00:01.032317 5112 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-dxwg6"
Dec 08 18:00:01 crc kubenswrapper[5112]: I1208 18:00:01.219964 5112 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420280-dxwg6"]
Dec 08 18:00:02 crc kubenswrapper[5112]: I1208 18:00:02.189606 5112 generic.go:358] "Generic (PLEG): container finished" podID="17b2ad43-887b-4030-a2eb-88516ad3f4d1" containerID="7f31a163668b7f49d3fe3717ab2f756d38cae07c01a1e46c8c9bfed7a7d0104c" exitCode=0
Dec 08 18:00:02 crc kubenswrapper[5112]: I1208 18:00:02.189698 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-dxwg6" event={"ID":"17b2ad43-887b-4030-a2eb-88516ad3f4d1","Type":"ContainerDied","Data":"7f31a163668b7f49d3fe3717ab2f756d38cae07c01a1e46c8c9bfed7a7d0104c"}
Dec 08 18:00:02 crc kubenswrapper[5112]: I1208 18:00:02.189945 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-dxwg6" event={"ID":"17b2ad43-887b-4030-a2eb-88516ad3f4d1","Type":"ContainerStarted","Data":"a15fa1105cb8f5ad10043747f21bbc95c3b700464b78359f2d5cccea64ee317d"}
Dec 08 18:00:03 crc kubenswrapper[5112]: I1208 18:00:03.460061 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-dxwg6"
Dec 08 18:00:03 crc kubenswrapper[5112]: I1208 18:00:03.525338 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mlnf\" (UniqueName: \"kubernetes.io/projected/17b2ad43-887b-4030-a2eb-88516ad3f4d1-kube-api-access-7mlnf\") pod \"17b2ad43-887b-4030-a2eb-88516ad3f4d1\" (UID: \"17b2ad43-887b-4030-a2eb-88516ad3f4d1\") "
Dec 08 18:00:03 crc kubenswrapper[5112]: I1208 18:00:03.525442 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/17b2ad43-887b-4030-a2eb-88516ad3f4d1-secret-volume\") pod \"17b2ad43-887b-4030-a2eb-88516ad3f4d1\" (UID: \"17b2ad43-887b-4030-a2eb-88516ad3f4d1\") "
Dec 08 18:00:03 crc kubenswrapper[5112]: I1208 18:00:03.525546 5112 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17b2ad43-887b-4030-a2eb-88516ad3f4d1-config-volume\") pod \"17b2ad43-887b-4030-a2eb-88516ad3f4d1\" (UID: \"17b2ad43-887b-4030-a2eb-88516ad3f4d1\") "
Dec 08 18:00:03 crc kubenswrapper[5112]: I1208 18:00:03.526575 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17b2ad43-887b-4030-a2eb-88516ad3f4d1-config-volume" (OuterVolumeSpecName: "config-volume") pod "17b2ad43-887b-4030-a2eb-88516ad3f4d1" (UID: "17b2ad43-887b-4030-a2eb-88516ad3f4d1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 18:00:03 crc kubenswrapper[5112]: I1208 18:00:03.534634 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17b2ad43-887b-4030-a2eb-88516ad3f4d1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "17b2ad43-887b-4030-a2eb-88516ad3f4d1" (UID: "17b2ad43-887b-4030-a2eb-88516ad3f4d1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 18:00:03 crc kubenswrapper[5112]: I1208 18:00:03.534963 5112 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17b2ad43-887b-4030-a2eb-88516ad3f4d1-kube-api-access-7mlnf" (OuterVolumeSpecName: "kube-api-access-7mlnf") pod "17b2ad43-887b-4030-a2eb-88516ad3f4d1" (UID: "17b2ad43-887b-4030-a2eb-88516ad3f4d1"). InnerVolumeSpecName "kube-api-access-7mlnf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 18:00:03 crc kubenswrapper[5112]: I1208 18:00:03.627259 5112 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7mlnf\" (UniqueName: \"kubernetes.io/projected/17b2ad43-887b-4030-a2eb-88516ad3f4d1-kube-api-access-7mlnf\") on node \"crc\" DevicePath \"\""
Dec 08 18:00:03 crc kubenswrapper[5112]: I1208 18:00:03.627302 5112 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/17b2ad43-887b-4030-a2eb-88516ad3f4d1-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 08 18:00:03 crc kubenswrapper[5112]: I1208 18:00:03.627315 5112 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17b2ad43-887b-4030-a2eb-88516ad3f4d1-config-volume\") on node \"crc\" DevicePath \"\""
Dec 08 18:00:04 crc kubenswrapper[5112]: I1208 18:00:04.204782 5112 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-dxwg6" event={"ID":"17b2ad43-887b-4030-a2eb-88516ad3f4d1","Type":"ContainerDied","Data":"a15fa1105cb8f5ad10043747f21bbc95c3b700464b78359f2d5cccea64ee317d"}
Dec 08 18:00:04 crc kubenswrapper[5112]: I1208 18:00:04.204839 5112 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a15fa1105cb8f5ad10043747f21bbc95c3b700464b78359f2d5cccea64ee317d"
Dec 08 18:00:04 crc kubenswrapper[5112]: I1208 18:00:04.204816 5112 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-dxwg6"
Dec 08 18:00:23 crc kubenswrapper[5112]: I1208 18:00:23.739616 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kvv4v_288ee203-be3f-4176-90b2-7d95ee47aee8/kube-multus/0.log"
Dec 08 18:00:23 crc kubenswrapper[5112]: I1208 18:00:23.741751 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kvv4v_288ee203-be3f-4176-90b2-7d95ee47aee8/kube-multus/0.log"
Dec 08 18:00:23 crc kubenswrapper[5112]: I1208 18:00:23.747599 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Dec 08 18:00:23 crc kubenswrapper[5112]: I1208 18:00:23.748045 5112 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Dec 08 18:00:41 crc kubenswrapper[5112]: I1208 18:00:41.706915 5112 patch_prober.go:28] interesting pod/machine-config-daemon-s6wzf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 18:00:41 crc kubenswrapper[5112]: I1208 18:00:41.707630 5112 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" podUID="95e46da0-94bb-4d22-804b-b3018984cdac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 18:01:11 crc kubenswrapper[5112]: I1208 18:01:11.707100 5112 patch_prober.go:28] interesting pod/machine-config-daemon-s6wzf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 18:01:11 crc kubenswrapper[5112]: I1208 18:01:11.708125 5112 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s6wzf" podUID="95e46da0-94bb-4d22-804b-b3018984cdac" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"